Sample records for allowing large scale

  1. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software, enabling large scale without sacrificing fidelity.

  2. Wedge measures parallax separations...on large-scale 70-mm

    Treesearch

    Steven L. Wert; Richard J. Myhre

    1967-01-01

    A new parallax wedge (range: 1.5 to 2 inches) has been designed for use with large-scale 70-mm aerial photographs. The narrow separation of the wedge allows the user to measure small parallax separations that are characteristic of large-scale photographs.
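
    For context, a standard photogrammetric relation (not quoted in the record itself) connects the parallax difference Δp read from such a wedge to object height:

        \[ h = \frac{H\,\Delta p}{P + \Delta p}, \]

    where h is the object height, H the flying height above the object base, and P the absolute stereoscopic parallax at the base. On large-scale photographs Δp is small, which is exactly what the narrow wedge separation is designed to resolve.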

  3. Cram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T.

    2014-08-29

    Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.
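
    A minimal conceptual sketch of the pattern such a tool automates, assuming mpi4py is available; this is not Cram's actual implementation, only the idea of carving one large MPI allocation into many independent small jobs:

    ```python
    # Conceptual sketch only: split one large MPI allocation into many
    # independent small "virtual jobs" via sub-communicators.
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    procs_per_job = 4                      # size of each small virtual job
    job_id = world.rank // procs_per_job   # which virtual job this rank joins

    # Each virtual job gets its own communicator and sees only its own ranks.
    job_comm = world.Split(color=job_id, key=world.rank)
    print(f"world rank {world.rank}: job {job_id}, "
          f"local rank {job_comm.rank} of {job_comm.size}")
    ```

    Run under a single large allocation (e.g. mpiexec -n 1024 python sketch.py), each block of four ranks then proceeds as an independent small job.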

  4. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.
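
    For reference, the kinematic dynamo problem referred to here solves the induction equation for the magnetic field B with a prescribed velocity field u:

        \[ \partial_t \mathbf{B} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^2 \mathbf{B}, \qquad Rm = \frac{UL}{\eta}, \]

    where η is the magnetic diffusivity; the astrophysical limit discussed above is Rm → ∞ at fixed flow.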

  5. The Value of Large-Scale Randomised Control Trials in System-Wide Improvement: The Case of the Reading Catch-Up Programme

    ERIC Educational Resources Information Center

    Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo

    2017-01-01

    This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…

  6. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
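
    The dipole and quadrupole referred to here are the ℓ = 1 and ℓ = 2 terms of the standard multipole expansion of the temperature anisotropy:

        \[ \frac{\Delta T}{T}(\hat{n}) = \sum_{\ell \ge 1} \sum_{m=-\ell}^{\ell} a_{\ell m} Y_{\ell m}(\hat{n}), \]

    so constraints on the low-ℓ amplitudes translate directly into constraints on the largest-scale matter density fluctuations.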

  7. Mapping the universe in three dimensions

    PubMed Central

    Haynes, Martha P.

    1996-01-01

    The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714
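
    The "first-order application of the red shift-distance relation" mentioned here is the linear Hubble law, which converts a measured redshift z into a distance estimate:

        \[ cz \approx v = H_0 d \quad\Rightarrow\quad d \approx \frac{cz}{H_0}. \]

    Deviations of observed redshifts from this relation (peculiar velocities) are what allow the gravitational field and underlying density distribution to be mapped, as described above.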

  8. Mapping the universe in three dimensions.

    PubMed

    Haynes, M P

    1996-12-10

    The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble's law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin.

  9. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Non-linear scale interactions in a forced turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2015-11-01

    A strong phase-organizing influence exerted by a single synthetic large-scale spatio-temporal mode on directly-coupled (through triadic interactions) small scales in a turbulent boundary layer forced by a spatially-impulsive dynamic wall-roughness patch was previously demonstrated by the authors (J. Fluid Mech. 2015, vol. 767, R4). The experimental set-up was later enhanced to allow for simultaneous forcing of multiple scales in the flow. Results and analysis are presented from a new set of experiments in which two distinct large scales are forced in the flow by a dynamic wall-roughness patch. The internal non-linear forcing of two other scales that are triadically consistent with the artificially forced large scales, corresponding to their sum and difference wavenumbers, is dominated by the latter. This allows for a forcing-response (input-output) type analysis of the two triadic scales, and naturally lends itself to a resolvent-operator-based model (e.g. McKeon & Sharma, J. Fluid Mech. 2010, vol. 658, pp. 336-382) of the governing Navier-Stokes equations. The support of AFOSR (grant #FA 9550-12-1-0469, program manager D. Smith) is gratefully acknowledged.
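
    The sum- and difference-wavenumber scales arise because the quadratic nonlinearity of the Navier-Stokes equations couples modes in triads: forcing at wavenumber-frequency pairs (k₁, ω₁) and (k₂, ω₂) internally forces

        \[ (k_3, \omega_3) = (k_1 + k_2,\; \omega_1 + \omega_2) \quad\text{and}\quad (k_1 - k_2,\; \omega_1 - \omega_2), \]

    which is the triadic consistency the abstract refers to.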

  11. Chronic, Wireless Recordings of Large Scale Brain Activity in Freely Moving Rhesus Monkeys

    PubMed Central

    Schwarz, David A.; Lebedev, Mikhail A.; Hanson, Timothy L.; Dimitrov, Dragan F.; Lehew, Gary; Meloy, Jim; Rajangam, Sankaranarayani; Subramanian, Vivek; Ifft, Peter J.; Li, Zheng; Ramakrishnan, Arjun; Tate, Andrew; Zhuang, Katie; Nicolelis, Miguel A.L.

    2014-01-01

    Advances in techniques for recording large-scale brain activity contribute to both the elucidation of neurophysiological principles and the development of brain-machine interfaces (BMIs). Here we describe a neurophysiological paradigm for performing tethered and wireless large-scale recordings based on movable volumetric three-dimensional (3D) multielectrode implants. This approach allowed us to isolate up to 1,800 units per animal and simultaneously record the extracellular activity of close to 500 cortical neurons, distributed across multiple cortical areas, in freely behaving rhesus monkeys. The method is expandable, in principle, to thousands of simultaneously recorded channels. It also allows increased recording longevity (5 consecutive years), and recording of a broad range of behaviors, e.g. social interactions, and BMI paradigms in freely moving primates. We propose that wireless large-scale recordings could have a profound impact on basic primate neurophysiology research, while providing a framework for the development and testing of clinically relevant neuroprostheses. PMID:24776634

  12. Porous microwells for geometry-selective, large-scale microparticle arrays

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.

    2017-01-01

    Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.

  13. A 10-year ecosystem restoration community of practice tracks large-scale restoration trends

    EPA Science Inventory

    In 2004, a group of large-scale ecosystem restoration practitioners across the United States convened to start the process of sharing restoration science, management, and best practices under the auspices of a traditional conference umbrella. This forum allowed scientists and dec...

  14. The Diversity of School Organizational Configurations

    ERIC Educational Resources Information Center

    Lee, Linda C.

    2013-01-01

    School reform on a large scale has largely been unsuccessful. Approaches designed to document and understand the variety of organizational conditions that comprise our school systems are needed so that reforms can be tailored and results scaled. Therefore, this article develops a configurational framework that allows a systematic analysis of many…

  15. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. Such algorithms are, however, plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
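
    Schematically (the precise scaling forms are the subject of this work; leading corrections of order 1/t and 1/N are an assumption of this sketch), the estimators of the scaled cumulant generating function behave as

        \[ \psi_{t,N}(s) \simeq \psi(s) + \frac{a(s)}{t} + \frac{b(s)}{N}, \qquad \psi(s) = \lim_{t\to\infty} \frac{1}{t} \ln \left\langle e^{-s A_t} \right\rangle, \]

    so fitting measured estimators against 1/t and 1/N and extrapolating both to zero yields the infinite-time, infinite-size value ψ(s).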

  16. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  17. Decoupling local mechanics from large-scale structure in modular metamaterials.

    PubMed

    Yang, Nan; Silverberg, Jesse L

    2017-04-04

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such "inverse design" is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module's design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
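
    Restated symbolically (notation mine, not the paper's): if each module carries p independent mechanical design parameters and bulk assembly imposes c compatibility constraints per module, the stated inequality

        \[ p > c \]

    leaves p − c parameters free per voxel to set its local mechanical response independently of the global structure.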

  18. Decoupling local mechanics from large-scale structure in modular metamaterials

    NASA Astrophysics Data System (ADS)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  19. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    NASA Astrophysics Data System (ADS)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was regarded as a constant (Earth's gravity) in most of the analogue model studies, and only a few model studies considered larger gravitational acceleration by using a centrifuge (an apparatus generating large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is typically limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST) allows scale models with surface areas up to 70 by 70 cm under the maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
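
    The scaling logic behind centrifuge modelling, in its standard form (general background, not specific to this abstract): spinning a model at N times Earth gravity lets lengths shrink by a factor N while stresses remain at prototype values,

        \[ \sigma_{\mathrm{model}} = \rho\,(Ng)\,\frac{L}{N} = \rho g L = \sigma_{\mathrm{prototype}}, \]

    which is why a sub-metre model at high g can reproduce the stress state of a basin-scale system.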

  20. Large scale particle image velocimetry with helium filled soap bubbles

    NASA Astrophysics Data System (ADS)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.

  1. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  2. Dealing with Item Nonresponse in Large-Scale Cognitive Assessments: The Impact of Missing Data Methods on Estimated Explanatory Relationships

    ERIC Educational Resources Information Center

    Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.

    2017-01-01

    Competence data from low-stakes educational large-scale assessment studies allow for evaluating relationships between competencies and other variables. The impact of item-level nonresponse has not been investigated with regard to statistics that determine the size of these relationships (e.g., correlations, regression coefficients). Classical…

  3. Skin Friction Reduction Through Large-Scale Forcing

    NASA Astrophysics Data System (ADS)

    Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer

    2017-11-01

    Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near-wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.
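
    The triple decomposition referred to above splits the velocity into mean, coherent (phase-locked) and random parts,

        \[ u(x,t) = \bar{u}(x) + \tilde{u}(x,t) + u'(x,t), \qquad \tilde{u} = \langle u \rangle - \bar{u}, \]

    where ⟨·⟩ denotes the phase average synchronized with the acoustic forcing.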

  4. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
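
    A toy sketch of the precision-comparison idea described here, assuming NumPy and using float32 as a stand-in for low-precision hardware (the paper emulates bit flips and restricts reduced precision to small scales in a full dynamical core; this sketch only contrasts arithmetic precisions on Lorenz '96):

    ```python
    # Integrate the Lorenz '96 model, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    # at double and single precision and measure how the trajectories diverge.
    import numpy as np

    def l96_tendency(x, F=8.0):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def integrate(x0, dt=0.01, steps=2000, dtype=np.float64):
        x = x0.astype(dtype)
        for _ in range(steps):
            x = x + dtype(dt) * l96_tendency(x).astype(dtype)  # forward Euler for brevity
        return x

    x0 = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)
    x64 = integrate(x0, dtype=np.float64)
    x32 = integrate(x0, dtype=np.float32)
    print("RMS divergence:", float(np.sqrt(np.mean((x64 - x32) ** 2))))
    ```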

  5. Large Eddy Simulation of Gravitational Effects on Transitional and Turbulent Gas-Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Jaberi, Farhad A.

    2001-01-01

    The basic objective of this work is to assess the influence of gravity on "the compositional and the spatial structures" of transitional and turbulent diffusion flames via large eddy simulation (LES) and direct numerical simulation (DNS). The DNS is conducted for appraisal of the various closures employed in LES, and to study the effect of buoyancy on the small-scale flow features. The LES is based on our "filtered mass density function" (FMDF) model. The novelty of the methodology is that it allows for reliable simulations with inclusion of "realistic physics." It also allows for detailed analysis of the unsteady large-scale flow evolution and compositional flame structure, which is not usually possible via Reynolds-averaged simulations.

  6. Multilevel Latent Class Analysis for Large-Scale Educational Assessment Data: Exploring the Relation between the Curriculum and Students' Mathematical Strategies

    ERIC Educational Resources Information Center

    Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.

    2016-01-01

    A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…

  7. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    PubMed

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
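
    An illustrative sketch of constraint-based barcode generation, not the authors' framework: rejection sampling against the constraint types named above. The naive all-pairs Hamming check is O(n²), and avoiding that cost at library scale is precisely the problem such a framework solves; all parameter values here are hypothetical.

    ```python
    import random

    BASES = "ACGT"

    def gc_ok(seq, lo=0.4, hi=0.6):
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        return lo <= gc <= hi

    def homopolymer_ok(seq, max_run=3):
        # Reject any run longer than max_run of the same base.
        return not any(b * (max_run + 1) in seq for b in BASES)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def make_library(n, length=12, min_dist=3, blacklist=("GAATTC",), seed=0):
        rng = random.Random(seed)
        lib = []
        while len(lib) < n:
            cand = "".join(rng.choice(BASES) for _ in range(length))
            if not (gc_ok(cand) and homopolymer_ok(cand)):
                continue
            if any(s in cand for s in blacklist):
                continue
            # Naive pairwise distance check: the scalability bottleneck.
            if all(hamming(cand, b) >= min_dist for b in lib):
                lib.append(cand)
        return lib

    print(make_library(5))
    ```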

  8. Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward

    NASA Astrophysics Data System (ADS)

    Daley, T. M.

    2012-12-01

    The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2 induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller scale pilots can allow more frequent measurements as either individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, there is typically integration of characterization and monitoring, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales, while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM), where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.

  9. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.

    PubMed

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G

    2017-04-07

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.

  10. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules

    PubMed Central

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.

    2017-01-01

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351

  11. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  12. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
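
    Written out, the two planar averages contrasted in this work are

        \[ \overline{\mathbf{B}}^{xy}(z,t) = \frac{1}{L_x L_y}\int\!\!\int \mathbf{B}\, dx\, dy, \qquad \overline{\mathbf{B}}^{yz}(x,t) = \frac{1}{L_y L_z}\int\!\!\int \mathbf{B}\, dy\, dz, \]

    the first being the conventional horizontal average and the second the vertical average that additionally tracks the large-scale vertical field.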

  13. A Large-Scale Inquiry-Based Astronomy Intervention Project: Impact on Students' Content Knowledge Performance and Views of Their High School Science Classroom

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena; Deehan, James

    2016-01-01

    In this paper, we present the results from a study of the impact on students involved in a large-scale inquiry-based astronomical high school education intervention in Australia. Students in this intervention were led through an educational design allowing them to undertake an investigative approach to understanding the lifecycle of stars more…

  14. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  16. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  17. On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans

    NASA Astrophysics Data System (ADS)

    Grooms, I.; Julien, K. A.; Fox-Kemper, B.

    2011-12-01

    Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.

  18. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    NASA Astrophysics Data System (ADS)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
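
    A minimal sketch of the core delineation step such studies rely on, assuming NumPy; this is the classic D8 steepest-descent routing with flow accumulation, not the specific GIS workflow used in the paper (no depression filling, toy-sized DEM):

    ```python
    import numpy as np

    def d8_flow_dir(dem):
        """For each cell, index (0-7) of the steepest-descent neighbour, or -1 for pits."""
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        nr, nc = dem.shape
        fdir = -np.ones((nr, nc), dtype=int)
        for r in range(nr):
            for c in range(nc):
                best, drop = -1, 0.0
                for k, (dr, dc) in enumerate(offs):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nr and 0 <= cc < nc:
                        d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if d > drop:
                            best, drop = k, d
                fdir[r, c] = best
        return fdir, offs

    def accumulation(dem):
        fdir, offs = d8_flow_dir(dem)
        acc = np.ones(dem.shape)
        # Process cells from highest to lowest so donors are counted before receivers.
        for idx in np.argsort(dem, axis=None)[::-1]:
            r, c = divmod(int(idx), dem.shape[1])
            if fdir[r, c] >= 0:
                dr, dc = offs[fdir[r, c]]
                acc[r + dr, c + dc] += acc[r, c]
        return acc

    dem = np.array([[5., 4., 3.],
                    [4., 3., 2.],
                    [3., 2., 1.]])
    print(accumulation(dem))  # streams emerge where accumulation is high
    ```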

  19. Development of a gene synthesis platform for the efficient large scale production of small genes encoding animal toxins.

    PubMed

    Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A

    2016-12-01

    Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
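
    A hedged sketch of one design step implied here: splitting a target sequence into overlapping oligonucleotides for PCR-based assembly. Oligo and overlap lengths are illustrative, not the authors' parameters:

    ```python
    def design_oligos(gene, oligo_len=60, overlap=20):
        """Alternating top/bottom-strand oligos tiling the gene with fixed overlaps."""
        comp = str.maketrans("ACGT", "TGCA")
        step = oligo_len - overlap
        oligos = []
        for i, start in enumerate(range(0, max(1, len(gene) - overlap), step)):
            frag = gene[start:start + oligo_len]
            if i % 2 == 1:  # bottom strand: reverse complement
                frag = frag.translate(comp)[::-1]
            oligos.append(frag)
        return oligos

    gene = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTT"
    for o in design_oligos(gene, oligo_len=30, overlap=10):
        print(o)
    ```

    Each oligo shares its overlap region with its neighbour on the opposite strand, so the pool anneals and extends into the full-length product during assembly PCR.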

  20. Global Scale Solar Disturbances

    NASA Astrophysics Data System (ADS)

    Title, A. M.; Schrijver, C. J.; DeRosa, M. L.

    2013-12-01

    The combination of the STEREO and SDO missions has allowed, for the first time, imagery of the entire Sun. This, coupled with the high cadence, broad thermal coverage, and large dynamic range of the Atmospheric Imaging Assembly on SDO, has allowed the discovery of impulsive solar disturbances that can significantly affect a hemisphere or more of the solar volume. Such events are often, but not always, associated with M- and X-class flares. GOES C- and even B-class flares are also associated with these large-scale disturbances. Key to the recognition of the large-scale disturbances was the creation of log difference movies. By taking the log of images before differencing, events in the corona become much more evident. Because such events cover such a large portion of the solar volume, their passage can affect the dynamics of the entire corona as it adjusts to and recovers from their passage. In some cases this may lead to another flare or filament ejection, but in general direct causal evidence of 'sympathetic' behavior is lacking. However, evidence is accumulating that these large-scale events create an environment that encourages other solar instabilities to occur. Understanding the source of these events and how the energy that drives them is built up, stored, and suddenly released is critical to understanding the origins of space weather. Example events and comments on their relevance will be presented.
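
    A minimal sketch of the log-difference step described above, assuming NumPy: taking logs before differencing turns intensity ratios into differences, so fractional changes register equally in faint and bright regions:

    ```python
    import numpy as np

    def log_difference(frame_a, frame_b, floor=1.0):
        """log(I_b) - log(I_a) = log(I_b / I_a); floor avoids log of zero counts."""
        a = np.maximum(np.asarray(frame_a, dtype=float), floor)
        b = np.maximum(np.asarray(frame_b, dtype=float), floor)
        return np.log(b) - np.log(a)

    rng = np.random.default_rng(1)
    before = rng.uniform(10, 1000, size=(4, 4))
    after = before * 1.05                 # a uniform 5% brightening...
    print(log_difference(before, after))  # ...appears as a constant ~0.049 everywhere
    ```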

  1. Perturbation theory for cosmologies with nonlinear structure

    NASA Astrophysics Data System (ADS)

    Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy

    2017-11-01

    The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.

  2. Climatological Factors Affecting Electromagnetic Surface Ducting in the Aegean Sea Region

    DTIC Science & Technology

    2012-03-01

    low precipitation, and northeasterly winds, all due to changes in large-scale circulations and a northward shift in extratropical storm tracks. The ... differences over the Aegean region that are governed by large-scale climate factors. a. Winter: During winter, the Aegean area is subject to extratropical ... extratropical cyclones from entering the Aegean region, while opposite shifts can allow extratropical cyclones to more frequently enter the Aegean

  3. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  4. Sound production due to large-scale coherent structures. [and identification of noise mechanisms in turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The sound due to the large-scale (wavelike) structure in an infinite free turbulent shear flow is examined. Specifically, a computational study of a plane shear layer is presented, which accounts, by way of a triple decomposition of the flow field variables, for three distinct component scales of motion (mean, wave, turbulent), and from which the sound in the acoustic field due to the large-scale wavelike structure can be isolated by a simple phase average. The computational approach has allowed for the identification of a specific noise production mechanism, viz. the wave-induced stress, and has indicated the effect of coherent structure amplitude and growth and decay characteristics on noise levels produced in the acoustic far field.

  5. Teaching Real Science with a Microcomputer.

    ERIC Educational Resources Information Center

    Naiman, Adeline

    1983-01-01

    Discusses various ways science can be taught using microcomputers, including simulations/games which allow large-scale or historic experiments to be replicated on a manageable scale in a brief time. Examples of several computer programs are also presented, including "Experiments in Human Physiology,""Health Awareness…

  6. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique; it then embodies the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
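
    The "full analysis outside the loop" idea can be sketched briefly: run the expensive analysis once per cycle, then optimize a cheap approximation within move limits. The Python below is a schematic illustration under that assumption, not the paper's method in detail.

        # Sketch: expensive full analysis performed only once per cycle;
        # a cheap linear approximation drives the inner optimization loop.
        import numpy as np
        from scipy.optimize import minimize

        def full_analysis(x):
            """Stand-in for a costly high-fidelity structural analysis."""
            return np.sum(x**2), 2 * x            # objective and gradient

        def optimize_with_periodic_analysis(x0, cycles=5):
            x = np.asarray(x0, dtype=float)
            for _ in range(cycles):
                f0, grad = full_analysis(x)       # full analysis: once per cycle

                def approx_obj(x_new, x_ref=x, f_ref=f0, g_ref=grad):
                    # Linearized objective, valid near the analysis point.
                    return f_ref + g_ref @ (x_new - x_ref)

                bounds = [(xi - 0.5, xi + 0.5) for xi in x]   # move limits
                x = minimize(approx_obj, x, bounds=bounds).x
            return x

        print(optimize_with_periodic_analysis([3.0, -2.0, 1.0]))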

  7. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
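
    Written out schematically (the notation is generic; the harmonic-mean form of the averaged coefficient follows the authors' treatment), the two kinds of diffusion and the homogenized result are:

        ecological:   \partial u/\partial t = \nabla^2[\,\mu(\mathbf{x})\,u\,]
        Fickian:      \partial u/\partial t = \nabla\cdot[\,\mu(\mathbf{x})\,\nabla u\,]

    Homogenizing the ecological form yields, on the large scale,

        \partial c/\partial t = \bar{\mu}\,\nabla^2 c,  \qquad  \bar{\mu} = \Big[\frac{1}{|\Omega|}\int_\Omega \frac{d\mathbf{x}}{\mu(\mathbf{x})}\Big]^{-1},  \qquad  u \approx \frac{\bar{\mu}}{\mu(\mathbf{x})}\,c,

    so the effective motility is a harmonic average over the small-scale cell \Omega, and population density is locally elevated where motility is low.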

  8. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; et al.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  9. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Astrophysics Data System (ADS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ~70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at ~10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  10. Regional climate model sensitivity to domain size

    NASA Astrophysics Data System (ADS)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large-scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows us to detect important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.

  11. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  12. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  13. Large-scale, high-density (up to 512 channels) recording of local circuits in behaving animals

    PubMed Central

    Berényi, Antal; Somogyvári, Zoltán; Nagy, Anett J.; Roux, Lisa; Long, John D.; Fujisawa, Shigeyoshi; Stark, Eran; Leonardo, Anthony; Harris, Timothy D.

    2013-01-01

    Monitoring representative fractions of neurons from multiple brain circuits in behaving animals is necessary for understanding neuronal computation. Here, we describe a system that allows high-channel-count recordings from a small volume of neuronal tissue using a lightweight signal multiplexing headstage that permits free behavior of small rodents. The system integrates multishank, high-density recording silicon probes, ultraflexible interconnects, and a miniaturized microdrive. These improvements allowed for simultaneous recordings of local field potentials and unit activity from hundreds of sites without confining free movements of the animal. The advantages of large-scale recordings are illustrated by determining the electroanatomic boundaries of layers and regions in the hippocampus and neocortex and constructing a circuit diagram of functional connections among neurons in real anatomic space. These methods will allow the investigation of circuit operations and behavior-dependent interregional interactions for testing hypotheses of neural networks and brain function. PMID:24353300

  14. Analysis of Large-Scale Resurfacing Processes on Mercury: Mapping the Derain (H-10) Quadrangle

    NASA Astrophysics Data System (ADS)

    Whitten, J. L.; Ostrach, L. R.; Fassett, C. I.

    2018-05-01

    The Derain (H-10) Quadrangle of Mercury contains a large region of "average" crustal materials, with minimal smooth plains and basin ejecta, allowing the relative contribution of volcanic and impact processes to be assessed through geologic mapping.

  15. Architectural Visualization of C/C++ Source Code for Program Comprehension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panas, T; Epperly, T W; Quinlan, D

    2006-09-01

    Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.

  16. Automated radiosynthesis of Al[18F]PSMA-11 for large scale routine use.

    PubMed

    Kersemans, Ken; De Man, Kathia; Courtyn, Jan; Van Royen, Tessa; Piron, Sarah; Moerman, Lieselotte; Brans, Boudewijn; De Vos, Filip

    2018-05-01

    We report a reproducible automated radiosynthesis for large-scale batch production of clinical grade Al[18F]PSMA-11. A SynthraFCHOL module was optimized to synthesize Al[18F]PSMA-11 by Al[18F]-chelation. Results: Al[18F]PSMA-11 was synthesized within 35 min in a yield of 21 ± 3% (24.0 ± 6.0 GBq) and a radiochemical purity > 95%. Batches were stable for 4 h and conform to the European Pharmacopoeia guidelines. The automated synthesis of Al[18F]PSMA-11 allows for large-scale production and distribution of Al[18F]PSMA-11. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  18. The physics behind the larger scale organization of DNA in eukaryotes.

    PubMed

    Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut

    2009-07-01

    In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.

  19. When micro meets macro: microbial lipid analysis and ecosystem ecology

    NASA Astrophysics Data System (ADS)

    Balser, T.; Gutknecht, J.

    2008-12-01

    There is growing interest in linking soil microbial community composition and activity with large-scale field studies of nutrient cycling or plant community response to disturbances. And while analysis of microbial communities has moved rapidly in the past decade from culture-based to non-culture-based techniques, it must still be asked: what have we gained from the move? How well does the necessarily micro scale of microbial analysis allow us to address questions of interest at the macro scale? Several challenges exist in bridging the scales, and foremost is the question of methodological feasibility. Past microbiological methodologies have not been readily adaptable to the large sample sizes necessary for ecosystem-scale research. As a result, it has been difficult to generate compatible microbial and ecosystem data sets. We describe the use of a modified lipid extraction method to generate microbial community data sets that allow us to match landscape-scale or long-term ecological studies with microbial community data. We briefly discuss the challenges and advantages associated with lipid analysis as an approach to addressing ecosystem ecological studies, and provide examples from our research in ecosystem restoration and recovery following disturbance and climate change.

  20. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images - compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field and its potential benefits to large-scale geospatial visualization in general.
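
    The prefix-matching spatial index can be pictured as a quadtree key: each object's position is encoded as a string of quadrant digits, so every object in a given tile shares that tile's key as a prefix, and a tile query becomes a sorted-range lookup. A toy Python version (illustrative, not the paper's implementation):

        # Toy prefix-matching spatial index with quadtree-style keys.
        from bisect import bisect_left, bisect_right

        def quad_key(x, y, depth=8, xr=(0.0, 360.0), yr=(-90.0, 90.0)):
            """Encode a sky position as a string of quadrant digits 0-3."""
            key, (x0, x1), (y0, y1) = [], xr, yr
            for _ in range(depth):
                xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
                key.append(str((x >= xm) + 2 * (y >= ym)))
                x0, x1 = (xm, x1) if x >= xm else (x0, xm)
                y0, y1 = (ym, y1) if y >= ym else (y0, ym)
            return "".join(key)

        objects = [("objA", 150.1, 2.2), ("objB", 150.2, 2.3), ("objC", 40.0, -5.0)]
        index = sorted((quad_key(x, y), name) for name, x, y in objects)
        keys = [k for k, _ in index]

        def query_tile(prefix):
            """Return all objects whose key starts with the tile prefix."""
            lo = bisect_left(keys, prefix)
            hi = bisect_right(keys, prefix + "\x7f")   # '\x7f' sorts above any digit
            return [name for _, name in index[lo:hi]]

        print(query_tile(quad_key(150.15, 2.25, depth=4)))   # objA, objB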

  1. New Fund Allows Colleges to Pool Resources for Large-Scale Real-Estate Investments.

    ERIC Educational Resources Information Center

    McMillen, Liz

    1988-01-01

    The Real Estate Investment Trust, a companion organization to the Common Trust, allows colleges to commit as little as $50,000 for investments in commercial properties at minimum risk, which could protect endowments while providing returns comparable to those of the stock market. (MSE)

  2. Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.

    PubMed

    Baars, W J; Hutchins, N; Marusic, I

    2017-03-13

    Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Reτ ≈ 2800 to 22,800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
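
    Amplitude modulation of this kind is typically quantified by correlating the large-scale velocity with the low-pass-filtered envelope of the small scales, the envelope being obtained from a Hilbert transform. A schematic Python version of that diagnostic (the standard approach in the modulation literature, not necessarily the authors' exact pipeline; sampling rate and cutoff are assumed values, and the random signal merely stands in for hot-wire data):

        # Schematic amplitude-modulation coefficient for a velocity series.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 10_000.0                            # sampling rate, Hz (assumed)
        u = np.random.randn(int(5 * fs))         # stand-in for a hot-wire signal

        b, a = butter(4, 100.0 / (fs / 2), btype="low")   # scale separation
        u_large = filtfilt(b, a, u)
        u_small = u - u_large

        envelope = np.abs(hilbert(u_small))      # small-scale amplitude envelope
        env_large = filtfilt(b, a, envelope)     # keep its large-scale content

        # Correlation of the filtered envelope with the large scales:
        R_am = np.corrcoef(u_large, env_large)[0, 1]
        print(f"amplitude-modulation coefficient: {R_am:+.3f}")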

  3. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  4. Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat.

    PubMed

    Schjetnan, Andrea Gomez Palacio; Luczak, Artur

    2011-10-19

    Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity of a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records combined signal from all of them, where contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of a large number of recording sites (+64) and a small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recordings of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2).

  5. Recording Large-scale Neuronal Ensembles with Silicon Probes in the Anesthetized Rat

    PubMed Central

    Schjetnan, Andrea Gomez Palacio; Luczak, Artur

    2011-01-01

    Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity of a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records combined signal from all of them, where contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of a large number of recording sites (+64) and a small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recordings of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2). PMID:22042361

  6. Seemingly unrelated intervention time series models for effectiveness evaluation of large scale environmental remediation.

    PubMed

    Ip, Ryan H L; Li, W K; Leung, Kenneth M Y

    2013-09-15

    Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1) assuming no correlation within or across variables, Model (2) assuming no correlation across variables but allowing correlations within a variable across different sites, and Model (3) allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discussed our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
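
    In generic notation (schematic, not copied from the paper), an intervention SUR system for water-quality variable i at site j is

        y_{ijt} = \mu_{ij} + \omega_{ij}\,I_t(T) + \varepsilon_{ijt},  \qquad  \boldsymbol{\varepsilon}_t \sim N(\mathbf{0}, \Sigma),

    where I_t(T) is an indicator switching on after the remediation at time T and \omega_{ij} is the intervention effect. The three models then differ only in \Sigma: Model (1) takes \Sigma diagonal; Model (2) allows a block-diagonal \Sigma with covariances between sites within the same variable; Model (3) leaves \Sigma unrestricted.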

  7. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  8. The interaction between active normal faulting and large scale gravitational mass movements revealed by paleoseismological techniques: A case study from central Italy

    NASA Astrophysics Data System (ADS)

    Moro, M.; Saroli, M.; Gori, S.; Falcucci, E.; Galadini, F.; Messina, P.

    2012-05-01

    Paleoseismological techniques have been applied to characterize the kinematic behaviour of large-scale gravitational phenomena located in proximity of the seismogenic fault responsible for the Mw 7.0, 1915 Avezzano earthquake and to identify evidence of a possible coseismic reactivation. The above mentioned techniques were applied to the surface expression of the main sliding planes of the Mt. Serrone gravitational deformation, located on the southeastern border of the Fucino basin (central Italy). The approach allows us to detect instantaneous events of deformation along the uphill-facing scarp. These events are evidenced by the presence of faulted deposits and colluvial wedges. The identified and chronologically-constrained episodes of rapid displacement can probably be correlated with seismic events determined by the activation of the Fucino seismogenic fault, affecting the toe of the gravitationally unstable rock mass. Indeed, this fault can produce strong, short-term dynamic stresses able to trigger the release of local gravitational stress accumulated by Mt. Serrone's large-scale gravitational phenomena. The applied methodology could allow us to better understand the geometric and kinematic relationships between active tectonic structures and large-scale gravitational phenomena. This is all the more important in seismically active regions, since deep-seated gravitational slope deformations can evolve into a catastrophic collapse and can strongly increase the level of earthquake-induced hazards.

  9. Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks

    PubMed Central

    Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.

    2014-01-01

    Kernel spectral clustering corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation allows one to build a model on a representative subgraph of the large-scale network in the training phase, and the model parameters are estimated in the validation stage. The kernel spectral clustering (KSC) model has a powerful out-of-sample extension property which allows cluster affiliation for the unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have a multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques like the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877
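
    The out-of-sample extension amounts to projecting the kernel similarities of unseen nodes onto eigenvectors learned from the training subgraph. A schematic numpy version of that projection step (illustrative, not the authors' code; RBF similarities on random features stand in for the network kernel):

        # Schematic spectral embedding with an out-of-sample projection.
        import numpy as np

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(100, 5))      # representative subgraph
        X_new = rng.normal(size=(10, 5))         # unseen nodes

        def rbf(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma**2))

        K = rbf(X_train, X_train)
        D_inv = np.diag(1.0 / K.sum(axis=1))     # degree normalization
        vals, vecs = np.linalg.eig(D_inv @ K)    # eigenproblem at the dual level
        order = np.argsort(-vals.real)
        alpha = vecs.real[:, order[1:4]]         # leading non-trivial eigenvectors

        K_new = rbf(X_new, X_train)              # similarities of unseen nodes
        scores = K_new @ alpha                   # out-of-sample projections
        print(np.argmax(scores, axis=1))         # crude cluster affiliation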

  10. Future of applied watershed science at regional scales

    Treesearch

    Lee Benda; Daniel Miller; Steve Lanigan; Gordon Reeves

    2009-01-01

    Resource managers must deal increasingly with land use and conservation plans applied at large spatial scales (watersheds, landscapes, states, regions) involving multiple interacting federal agencies and stakeholders. Access to a geographically focused and application-oriented database would allow users in different locations and with different concerns to quickly...

  11. "Fan-Tip-Drive" High-Power-Density, Permanent Magnet Electric Motor and Test Rig Designed for a Nonpolluting Aircraft Propulsion Program

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Kascak, Albert F.

    2004-01-01

    A scaled blade-tip-drive test rig was designed at the NASA Glenn Research Center. The rig is a scaled version of a direct-current brushless motor that would be located in the shroud of a thrust fan. This geometry is very attractive since the allowable speed of the armature is approximately the speed of the blade tips (Mach 1 or 1100 ft/s). The magnetic pressure generated in the motor acts over a large area and, thus, produces a large force or torque. This large force multiplied by the large velocity results in a high-power-density motor.

  12. Phase relations in a forced turbulent boundary layer: implications for modelling of high Reynolds number wall turbulence

    PubMed Central

    2017-01-01

    Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167576

  13. A comprehensive surface-groundwater flow model

    NASA Astrophysics Data System (ADS)

    Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert

    1993-02-01

    In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km² watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.

  14. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up on the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting the simple 0th-order estimates, reconciling each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
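
    The Kalman-based correction can be condensed to a single update: treat the coarse microwave value as a noisy observation of the mean of the fine-scale model pixels and nudge each pixel by a gain-weighted share of the mismatch. A toy Python version with assumed error variances (illustrative numbers only, not the paper's configuration):

        # One Kalman-style update disaggregating a coarse measurement.
        import numpy as np

        x = np.array([0.18, 0.22, 0.25, 0.30])  # fine-scale soil moisture
        P = np.full(4, 0.004)                    # model error variances (assumed)
        z, r = 0.20, 0.002                       # coarse retrieval and its variance

        H = np.full(4, 0.25)                     # observation operator: pixel mean
        innovation = z - H @ x                   # coarse-scale mismatch
        S = H @ (P * H) + r                      # innovation variance
        K = (P * H) / S                          # per-pixel Kalman gain

        print(x + K * innovation)                # pixels nudged toward the retrieval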

  15. The cosmological principle is not in the sky

    NASA Astrophysics Data System (ADS)

    Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan

    2017-08-01

    The homogeneity of matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The case is testable though, thus no longer needs to be a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume with the radius scale varying up to 300 h⁻¹ Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with allowed ranges of the random distribution with homogeneity. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to 300 h⁻¹ Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to be realized in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from the N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology, thus the simulation is also not homogeneous in that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
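
    The counting test itself is simple to emulate: count galaxies in spheres of growing radius and compare the mean and dispersion of the counts with a homogeneous random catalogue, for which the variance should track the mean. A toy Python version (uniform points stand in for the random catalogue; box size and numbers are arbitrary):

        # Toy counts-in-spheres homogeneity check.
        import numpy as np

        rng = np.random.default_rng(1)
        box = 1000.0                                       # toy box side
        points = rng.uniform(0, box, size=(20000, 3))      # "galaxies"

        def counts_in_spheres(radius, n_centres=200):
            centres = rng.uniform(radius, box - radius, size=(n_centres, 3))
            d = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=2)
            return (d < radius).sum(axis=1)

        for R in (50.0, 150.0, 300.0):
            n = counts_in_spheres(R)
            # Poisson (homogeneous) counts have var/mean near 1;
            # excess dispersion signals clustering at that scale.
            print(f"R={R:5.0f}  mean={n.mean():8.1f}  var/mean={n.var()/n.mean():5.2f}")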

  16. Does lower Omega allow a resolution of the large-scale structure problem?

    NASA Technical Reports Server (NTRS)

    Silk, Joseph; Vittorio, Nicola

    1987-01-01

    The intermediate angular scale anisotropy of the cosmic microwave background, peculiar velocities, density correlations, and mass fluctuations for both neutrino and baryon-dominated universes with Omega less than one are evaluated. The large coherence length associated with a low-Omega, hot dark matter-dominated universe provides substantial density fluctuations on scales up to 100 Mpc: there is a range of acceptable models that are capable of producing large voids and superclusters of galaxies and the clustering of galaxy clusters, with Omega roughly 0.3, without violating any observational constraint. Low-Omega, cold dark matter-dominated cosmologies are also examined. All of these models may be reconciled with the inflationary requirement of a flat universe by introducing a cosmological constant 1-Omega.

  17. Large scale 70mm photography for range resources analysis in the Western United States. [Casa Grande, Arizona, Mercury, Nevada, and Mojave Desert

    NASA Technical Reports Server (NTRS)

    Tueller, P. T.

    1977-01-01

    Large scale 70mm aerial photography is a valuable supplementary tool for rangeland studies. A wide assortment of applications was developed, varying from vegetation mapping to assessing environmental impact on rangelands. Color and color infrared stereo pairs are useful for effectively sampling sites limited by ground accessibility. They allow an increased sample size at a cost similar to or lower than that of ground sampling techniques, and provide a permanent record.

  18. Large-Scale Land Acquisitions in Sub-Saharan Africa: The Intersection of American Strategic Interests, Economics, Security, and Politics

    DTIC Science & Technology

    2012-05-01

    pressures on supply that led to the global food crisis of 2007 and 2008, allowing prices to fall from their peak in August 2008, the foundational...involved in the acquisition of farmland. This trend is also unlikely to slow, with food prices continuing to climb, surpassing the highs of 2007 and...and general secrecy in most large-scale land acquisition contracts, exact data regarding the number of deals and amount of land transferred are

  19. Flow chemistry kinetic studies reveal reaction conditions for ready access to unsymmetrical trehalose analogues.

    PubMed

    Patel, Mitul K; Davis, Benjamin G

    2010-10-07

    Monofunctionalization of trehalose, a widely-found symmetric plant disaccharide, was studied in a microreactor to give valuable kinetic insights that have allowed improvements in desymmetrization yields and the development of a reaction sequence for large scale monofunctionalizations that allow access to probes of trehalose's biological function.

  20. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while at the same time remaining viable in terms of computational resources for climate-length time scales. The Multiscale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing in traditional GCMs that leads to overabundant light precipitation and a lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM, the multi-scale 'Super-Parameterized' CESM where large-scale parameterizations have been replaced with a 2D cloud-permitting model, and a multi-instance land version of the SP-CESM where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period from 1979 to 2006 with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on detection and evaluation of extreme precipitation events.

  1. Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.

    PubMed

    Li, Min; Zhang, John Zenghui; Xia, Fei

    2016-04-12

    Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.

  2. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites for visitors to volunteer their computing resources to contribute to running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.

  3. Statistical analysis of the time and space characteristic scales for large precipitating systems in the equatorial, tropical, sahelian and mid-latitude regions.

    NASA Astrophysics Data System (ADS)

    Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya

    2017-04-01

    Precipitation plays a key role in the management of sustainable water resources and flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We propose to analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and morphological spectra (i.e. probability distribution functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This allows us to define two characteristic scales (duration and space extension) for the precipitating regions embedded in the large mesoscale convective systems (MCS). For tropical and equatorial regions (south of 12°N) the morphological spectra are close to a Lévy-stable distribution (power-law decrease), which does not allow a characteristic scale to be defined (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined and compared to the observed zonal advection. Maps of the characteristic scales and Lévy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Lévy-stable behaviours of precipitating MCS is compared with the result of ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test the different parameterizations of deep convection in forecast models.
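
    In schematic form (the symbols L_0, T_0 and \alpha are generic notation, not taken from the paper), the two tail behaviours contrasted here are

        P(L > \ell) \propto e^{-\ell/L_0}    (mid-latitude and Sahelian regions)
        P(L > \ell) \propto \ell^{-\alpha}   (tropical and equatorial regions)

    and analogously for durations with a scale T_0. When L_0 and T_0 both exist, a "statistical velocity" V = L_0/T_0 of the precipitating systems can be formed and compared with the observed zonal advection; in the power-law (Lévy-stable) regime no characteristic scale, and hence no such velocity, is defined.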

  4. Bridging the scales in a eulerian air quality model to assess megacity export of pollution

    NASA Astrophysics Data System (ADS)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

    In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists in the introduction of local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.

  5. STE thrust chamber technology: Main injector technology program and nozzle Advanced Development Program (ADP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The purpose of the STME Main Injector Program was to enhance the technology base for the large-scale main injector-combustor system of oxygen-hydrogen booster engines in the areas of combustion efficiency, chamber heating rates, and combustion stability. The initial task of the Main Injector Program, focused on analysis and theoretical predictions using existing models, was complemented by the design, fabrication, and test at MSFC of a subscale calorimetric, 40,000-pound thrust class, axisymmetric thrust chamber operating at approximately 2,250 psi and a 7:1 expansion ratio. Test results were used to further define combustion stability bounds, combustion efficiency, and heating rates using a large injector scale similar to the Pratt & Whitney (P&W) STME main injector design configuration including the tangential entry swirl coaxial injection elements. The subscale combustion data was used to verify and refine analytical modeling simulation and extend the database range to guide the design of the large-scale system main injector. The subscale injector design incorporated fuel and oxidizer flow area control features which could be varied; this allowed testing of several design points so that the STME conditions could be bracketed. The subscale injector design also incorporated high-reliability and low-cost fabrication techniques such as a one-piece electrical discharged machined (EDMed) interpropellant plate. Both subscale and large-scale injectors incorporated outer row injector elements with scarfed tip features to allow evaluation of reduced heating rates to the combustion chamber.

  6. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
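
    The flavour of the algebra, building complex connection sets from simpler ones with operators, can be conveyed in a few lines. The classes below are an illustrative sketch of that idea, not the API of the publicly released Python csa package:

        # Illustrative connection-set combinators in the spirit of CSA.
        import random

        class ConnectionSet:
            """A set of (source, target) pairs with algebraic operators."""
            def __init__(self, pairs):
                self.pairs = set(pairs)
            def __add__(self, other):            # union
                return ConnectionSet(self.pairs | other.pairs)
            def __sub__(self, other):            # difference
                return ConnectionSet(self.pairs - other.pairs)
            def __mul__(self, other):            # intersection
                return ConnectionSet(self.pairs & other.pairs)

        # Primitive sets from which more complex ones are composed.
        def all_to_all(n):
            return ConnectionSet((i, j) for i in range(n) for j in range(n))

        def one_to_one(n):
            return ConnectionSet((i, i) for i in range(n))

        def random_conn(n, p, seed=0):
            rng = random.Random(seed)
            return ConnectionSet((i, j) for i in range(n) for j in range(n)
                                 if rng.random() < p)

        # Example: 10% random connectivity with self-connections removed.
        net = random_conn(100, p=0.1) - one_to_one(100)
        print(len(net.pairs), "connections")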

  7. Dynamical links between small- and large-scale mantle heterogeneity: Seismological evidence

    NASA Astrophysics Data System (ADS)

    Frost, Daniel A.; Garnero, Edward J.; Rost, Sebastian

    2018-01-01

    We identify PKP•PKP scattered waves (also known as P′•P′) from earthquakes recorded at small-aperture seismic arrays at distances less than 65°. P′•P′ energy travels as a PKP wave through the core, up into the mantle, then scatters back down through the core to the receiver as a second PKP. P′•P′ waves are unique in that they allow scattering heterogeneities throughout the mantle to be imaged. We use array-processing methods to amplify low amplitude, coherent scattered energy signals and resolve their incoming direction. We deterministically map scattering heterogeneity locations from the core-mantle boundary to the surface. We use an extensive dataset with sensitivity to a large volume of the mantle and a location method allowing us to resolve and map more heterogeneities than have previously been possible, representing a significant increase in our understanding of small-scale structure within the mantle. Our results demonstrate that the distribution of scattering heterogeneities varies both radially and laterally. Scattering is most abundant in the uppermost and lowermost mantle, with a minimum in the mid-mantle, resembling the radial distribution of tomographically derived whole-mantle velocity heterogeneity. We investigate the spatial correlation of scattering heterogeneities with large-scale tomographic velocities, lateral velocity gradients, the locations of deep-seated hotspots and subducted slabs. In the lowermost 1500 km of the mantle, small-scale heterogeneities correlate with regions of low seismic velocity, high lateral seismic gradient, and proximity to hotspots. In the upper 1000 km of the mantle there is no significant correlation between scattering heterogeneity location and subducted slabs. Between 600 and 900 km depth, scattering heterogeneities are more common in the regions most remote from slabs, and close to hotspots. Scattering heterogeneities show an affinity for regions close to slabs within the upper 200 km of the mantle. The similarity between the distribution of large-scale and small-scale mantle structures suggests a dynamic connection across scales, whereby mantle heterogeneities of all sizes may be directed in similar ways by large-scale convective currents.

  8. Toward exascale production of recombinant adeno-associated virus for gene transfer applications.

    PubMed

    Cecchini, S; Negrete, A; Kotin, R M

    2008-06-01

    To gain acceptance as a medical treatment, adeno-associated virus (AAV) vectors require a scalable and economical production method. Recent developments indicate that recombinant AAV (rAAV) production in insect cells is compatible with current good manufacturing practice production on an industrial scale. This platform can fully support development of rAAV therapeutics from tissue culture to small animal models, to large animal models, to toxicology studies, to Phase I clinical trials and beyond. Efforts to characterize, optimize and develop insect cell-based rAAV production have culminated in successful bioreactor-scale production of rAAV, with total yields potentially capable of approaching the exa- (10^18) scale. These advances in large-scale AAV production will allow us to address specific catastrophic, intractable human diseases such as Duchenne muscular dystrophy, for which large amounts of recombinant vector are essential for a successful outcome.

  9. Honeycomb: Visual Analysis of Large Scale Social Networks

    NASA Astrophysics Data System (ADS)

    van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.

    The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large-scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns, and discuss lessons learned during design and implementation.

  10. Simulation of coherent nonlinear neutrino flavor transformation in the supernova environment: Correlated neutrino trajectories

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Fuller, George M.; Carlson, J.; Qian, Yong-Zhong

    2006-11-01

    We present results of large-scale numerical simulations of the evolution of neutrino and antineutrino flavors in the region above the late-time post-supernova-explosion proto-neutron star. Our calculations are the first to allow explicit flavor evolution histories on different neutrino trajectories and to self-consistently couple flavor development on these trajectories through forward-scattering-induced quantum coupling. Employing the atmospheric-scale neutrino mass-squared difference (|δm²| ≃ 3×10⁻³ eV²) and values of θ₁₃ allowed by current bounds, we find transformation of neutrino and antineutrino flavors over broad ranges of energy and luminosity in roughly the “bi-polar” collective mode. We find that this large-scale flavor conversion, largely driven by the flavor off-diagonal neutrino-neutrino forward scattering potential, sets in much closer to the proto-neutron star than simple estimates based on flavor-diagonal potentials and Mikheyev-Smirnov-Wolfenstein evolution would indicate. In turn, this suggests that models of r-process nucleosynthesis sited in the neutrino-driven wind could be affected substantially by active-active neutrino flavor mixing, even with the small measured neutrino mass-squared differences.

  11. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
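
    As a toy illustration of the problem class saCeSS targets (not of the saCeSS algorithm itself), the sketch below estimates the two rate constants of a small kinetic model by global optimization with SciPy's differential evolution; the model and all numerical values are invented.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

def model(y, t, k1, k2):
    # A -> B -> C with rate constants k1, k2; y = [A, B]
    return [-k1 * y[0], k1 * y[0] - k2 * y[1]]

t = np.linspace(0, 10, 50)
np.random.seed(1)
data = odeint(model, [1.0, 0.0], t, args=(0.7, 0.3))          # "true" parameters
data += np.random.normal(0.0, 0.01, data.shape)               # measurement noise

def cost(params):
    sim = odeint(model, [1.0, 0.0], t, args=tuple(params))
    return float(np.sum((sim - data) ** 2))                   # least squares

result = differential_evolution(cost, bounds=[(0.01, 5.0)] * 2, seed=1)
print(result.x)   # should recover roughly (0.7, 0.3)
```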

  12. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. Gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Finally, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  13. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Liu, Yangang

    2014-12-01

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have begun to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude less computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
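
    A brief sketch of the statistical ingredient named above: estimating the spatial autocorrelation function of a (here 2D) cloud field with FFTs via the Wiener-Khinchin relation. The random field is a stand-in; the actual parameterization of the paper is not reproduced.

```python
import numpy as np

def autocorrelation(field):
    """Normalised spatial autocorrelation of a 2D field via FFT."""
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2       # power spectrum
    acf = np.fft.ifft2(power).real            # Wiener-Khinchin: IFFT -> autocovariance
    return acf / acf[0, 0]                    # unity at zero lag

rng = np.random.default_rng(0)
cloud = rng.random((64, 64))                  # stand-in for a cloud water field
print(autocorrelation(cloud)[0, :5])          # correlation at small x-lags
```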

  14. Impact of Accumulated Error on Item Response Theory Pre-Equating with Mixed Format Tests

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F.

    2016-01-01

    The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…

  15. Original BPC3 Research Plan

    Cancer.gov

    The Breast and Prostate Cancer and Hormone-Related Gene Variant Study allows large-scale analyses of breast and prostate cancer risk in relation to genetic polymorphisms and gene-environment interactions that affect hormone metabolism.

  16. Large-scale velocities and primordial non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Fabian

    2010-09-15

    We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈ 0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
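
    For orientation, the redshift-space relation referred to can be written as the Kaiser formula with the Gaussian halo bias b_G replaced by a scale-dependent non-Gaussian bias; the form below is the standard local-type f_NL expression from the literature, not necessarily the exact expression derived in this paper.

```latex
% Kaiser power spectrum with scale-dependent non-Gaussian bias (local f_NL form)
P_s(k,\mu) = \left[ b(k) + f\mu^2 \right]^2 P_m(k), \qquad
b(k) = b_G + \Delta b(k), \qquad
\Delta b(k) = \frac{3 f_{\mathrm{NL}} (b_G - 1)\, \delta_c\, \Omega_m H_0^2}{c^2 k^2\, T(k)\, D(z)}
```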

  17. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Ω₀ + λ = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Ω₀ of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  18. SLIDE - a web-based tool for interactive visualization of large-scale -omics data.

    PubMed

    Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon

    2018-06-28

    Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow user-interaction-based real-time customization of graphics. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under the BSD license both as an online version and as a stand-alone version at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.

  19. Updates to WFC3/UVIS Filter-Dependent and Filter-Distinct Distortion Corrections

    NASA Astrophysics Data System (ADS)

    Martlin, Catherine; Kozhurina-Platais, Vera; McKay, Myles; Sabbi, Elena

    2018-06-01

    The WFC3/UVIS filter wheel contains 63 filters that cover a large range of wavelengths from the near ultraviolet to the near infrared. Previously, analysis was completed on the 14 most used UVIS filters to calibrate geometric distortions. These distortions are due to a combination of the optical assembly of HST as well as variabilities in the composition of individual filters. We report recent updates to the reference files that aid in correcting for these distortions for an additional 22 UVIS narrow- and medium-band filters and 4 unique UVIS filters. They were created following a calibration of the large-scale optical distortions and fine-scale filter-dependent distortions. Furthermore, we present results of a study of a selection of unique polynomial coefficient terms from all solved filters, which allows us to better investigate the filter-dependent patterns across a large range of wavelengths. These updates will provide important enhancements for HST/WFC3 users, as they allow more accurate alignment of images across the range of UVIS filters.

  20. Phase relations in a forced turbulent boundary layer: implications for modelling of high Reynolds number wall turbulence.

    PubMed

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2017-03-13

    Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  1. A GLOBAL GALACTIC DYNAMO WITH A CORONA CONSTRAINED BY RELATIVE HELICITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, A.; Mangalam, A., E-mail: avijeet@iiap.res.in, E-mail: mangalam@iiap.res.in

    We present a model for a global axisymmetric turbulent dynamo operating in a galaxy with a corona that treats the parameters of turbulence driven by supernovae and by magneto-rotational instability under a common formalism. The nonlinear quenching of the dynamo is alleviated by the inclusion of small-scale advective and diffusive magnetic helicity fluxes, which allow the gauge-invariant magnetic helicity to be transferred outside the disk and consequently to build up a corona during the course of dynamo action. The time-dependent dynamo equations are expressed in a separable form and solved through an eigenvector expansion constructed using the steady-state solutions of the dynamo equation. The parametric evolution of the dynamo solution allows us to estimate the final structure of the global magnetic field and the saturated value of the turbulence parameter α_m, even before solving the dynamical equations for evolution of magnetic fields in the disk and the corona, along with α-quenching. We then solve these equations simultaneously to study the saturation of the large-scale magnetic field, its dependence on the small-scale magnetic helicity fluxes, and the corresponding evolution of the force-free field in the corona. The quadrupolar large-scale magnetic field in the disk is found to reach equipartition strength within a timescale of 1 Gyr. The large-scale magnetic field in the corona obtained is much weaker than the field inside the disk and has only a weak impact on the dynamo operation.

  2. The ellipsoidal universe in the Planck satellite era

    NASA Astrophysics Data System (ADS)

    Cea, Paolo

    2014-06-01

    Recent Planck data confirm that the cosmic microwave background displays the quadrupole power suppression together with large-scale anomalies. Progressing from previous results that focused on the quadrupole anomaly, we strengthen the proposal that the slightly anisotropic ellipsoidal universe may account for these anomalies. We solved at large scales the Boltzmann equation for the photon distribution functions by taking into account both the effects of the inflation-produced primordial scalar perturbations and the anisotropy of the geometry of the ellipsoidal universe. We showed that the low quadrupole temperature correlations allowed us to fix the eccentricity at decoupling, e_dec = (0.86 ± 0.14) × 10⁻², and to constrain the direction of the symmetry axis. We found that the anisotropy of the geometry of the universe contributes only to the large-scale temperature anisotropies without affecting the higher multipoles of the angular power spectrum. Moreover, we showed that the ellipsoidal geometry of the universe induces a sizeable polarization signal at large scales without invoking the reionization scenario. We explicitly evaluated the quadrupole TE and EE correlations. We found an average large-scale polarization ΔT_pol = (1.20 ± 0.38) μK. We point out that great care is needed in the experimental determination of the large-scale polarization correlations, since the average temperature polarization could be misinterpreted as foreground emission, leading thereby to a considerable underestimate of the cosmic microwave background polarization signal.

  3. Oblique Wing Flights

    NASA Image and Video Library

    2018-05-09

    Flown in the mid-1970s, this Oblique Wing was a large-scale R/C experimental aircraft built to demonstrate the ability to pivot its wing to an oblique angle, allowing for a reduced drag penalty at transonic speeds.

  4. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  5. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  6. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-11-01

    Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, i.e., for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  7. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  8. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put another way, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address this challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  9. High-z objects and cold dark matter cosmogonies - Constraints on the primordial power spectrum on small scales

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1993-01-01

    Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10⁹-10¹⁰ solar masses and (1-5)×10¹⁴ solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies, and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.

  10. Mechanisms Controlling the Interannual Variation of Mixed Layer Temperature Averaged over the Nino-3 Region

    NASA Technical Reports Server (NTRS)

    Kim, Seung-Bum; Lee, Tong; Fukumori, Ichiro

    2007-01-01

    Processes controlling the interannual variation of mixed layer temperature (MLT) averaged over the Nino-3 domain (5°N-5°S, 150°-90°W) in the eastern equatorial Pacific are studied using an ocean data assimilation product that covers the period 1993-2003. The overall balance is such that surface heat flux opposes the MLT change but horizontal advection and subsurface processes assist the change. Advective tendencies are estimated here as the temperature fluxes through the domain's boundaries, with the boundary temperature referenced to the domain-averaged temperature to remove the dependence on temperature scale. This allows the authors to characterize external advective processes that warm or cool the water within the domain as a whole. The zonal advective tendency is caused primarily by large-scale advection of warm-pool water through the western boundary of the domain. The meridional advective tendency is contributed mostly by Ekman currents advecting large-scale temperature anomalies through the southern boundary of the domain. Unlike many previous studies, the subsurface processes that consist of vertical mixing and entrainment are explicitly evaluated. In particular, a rigorous method to estimate entrainment allows an exact budget closure. The vertical mixing across the mixed layer (ML) base has a contribution in phase with the MLT change. The entrainment tendency due to the temporal change in ML depth is negligible compared to other subsurface processes. The entrainment tendency by vertical advection across the ML base is dominated by large-scale changes in upwelling and the temperature of upwelling water. Tropical instability waves (TIWs) result in smaller-scale vertical advection that warms the domain during La Nina cooling events. However, such a warming tendency is overwhelmed by the cooling tendency associated with the large-scale upwelling by a factor of 2. In summary, all the balance terms are important in the MLT budget except the entrainment due to lateral induction and temporal variation in ML depth. All three advective tendencies are primarily caused by large-scale and low-frequency processes, and they assist the Nino-3 MLT change.

  11. Food waste impact on municipal solid waste angle of internal friction.

    PubMed

    Cho, Young Min; Ko, Jae Hac; Chi, Liqun; Townsend, Timothy G

    2011-01-01

    The impact of food waste content on the municipal solid waste (MSW) friction angle was studied. Using reconstituted fresh MSW specimens with different food waste contents (0%, 40%, 58%, and 80%), 48 small-scale (100-mm-diameter) direct shear tests and 12 large-scale (430 mm × 430 mm) direct shear tests were performed. A stress-controlled large-scale direct shear test device allowing approximately 170 mm of horizontal sample displacement was designed and used. At both testing scales, the mobilized internal friction angle of MSW decreased considerably as food waste content increased. As food waste content increased from 0% to 40% and from 40% to 80%, the mobilized internal friction angles (estimated using the mobilized peak (ultimate) shear strengths of the small-scale direct shear tests) decreased from 39° to 31° and from 31° to 7°, respectively, while those of the large-scale tests decreased from 36° to 26° and from 26° to 15°, respectively. Most friction angle measurements produced in this study fell within the range of those previously reported for MSW. Copyright © 2010 Elsevier Ltd. All rights reserved.
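
    As a worked example of the quantity reported above, a mobilized internal friction angle can be back-calculated from the peak shear stress and the applied normal stress in a direct shear test (cohesion neglected for simplicity); the stress values below are invented so as to roughly reproduce the reported angles.

```python
import math

def friction_angle_deg(peak_shear_kpa, normal_stress_kpa):
    # phi = arctan(tau_peak / sigma_n), ignoring cohesion
    return math.degrees(math.atan(peak_shear_kpa / normal_stress_kpa))

print(friction_angle_deg(81.0, 100.0))   # ~39 deg, like the 0% food waste case
print(friction_angle_deg(12.3, 100.0))   # ~7 deg, like the 80% food waste case
```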

  12. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    PubMed

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
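
    The computational point can be sketched in a few lines: the uncertainty of a regional mean depends on the joint, spatially correlated distribution of the pixels, which independent per-pixel draws understate. The covariance model and sizes below are toy values, not the paper's P. falciparum model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                       # pixels in one region
mean = np.full(n, 0.3)                        # predicted prevalence per pixel
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov = 0.01 * np.exp(-lags / 20.0)             # toy exponential spatial covariance

# Joint simulation preserves between-pixel correlation; marginal draws do not.
joint = rng.multivariate_normal(mean, cov, size=5000).mean(axis=1)
indep = rng.normal(mean, np.sqrt(np.diag(cov)), size=(5000, n)).mean(axis=1)
print(joint.std(), indep.std())               # the joint spread is much larger
```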

  13. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  14. Automation of large scale transient protein expression in mammalian cells

    PubMed Central

    Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.

    2011-01-01

    Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making manpower a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074

  15. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    DOT National Transportation Integrated Search

    1995-01-01

    Large-scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  16. Preliminary simulations of the large-scale environment during the FIRE cirrus IFO

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Toon, Owen B.

    1990-01-01

    Large-scale forcing (scales greater than 500 km) is the dominant factor in the generation, maintenance, and dissipation of cirrus cloud systems. However, the analyses of data acquired during the first Cirrus IFO have highlighted the importance of mesoscale processes (scales of 20 to 500 km) to the development of cirrus cloud systems. Unfortunately, Starr and Wylie found that the temporal and spatial resolution of the standard and supplemental rawinsonde data were insufficient to allow an explanation of all of the mesoscale cloud features present on 27-28 Oct. 1986. It is described how dynamic initialization, or 4-D data assimilation (FDDA), can provide a method to address this problem. The first steps towards application of FDDA to FIRE are also described.

  17. A novel approach for introducing cloud spatial structure into cloud radiative transfer parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Liu, Yangang

    2014-12-18

    Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have begun to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude less computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.

  18. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  19. Ultrafast carrier dynamics in the large-magnetoresistance material WTe2

    DOE PAGES

    Dai, Y. M.; Bowlan, J.; Li, H.; ...

    2015-10-07

    In this study, ultrafast optical pump-probe spectroscopy is used to track carrier dynamics in the large-magnetoresistance material WTe2. Our experiments reveal a fast relaxation process occurring on a subpicosecond time scale that is caused by electron-phonon thermalization, allowing us to extract the electron-phonon coupling constant. An additional slower relaxation process, occurring on a time scale of ~5-15 ps, is attributed to phonon-assisted electron-hole recombination. As the temperature decreases from 300 K, the time scale governing this process increases due to the reduction of the phonon population. However, below ~50 K, an unusual decrease of the recombination time sets in, most likely due to a change in the electronic structure that has been linked to the large magnetoresistance observed in this material.
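
    A sketch of the analysis step implied here: fitting a two-exponential decay to a pump-probe transient to separate the fast (electron-phonon) and slow (recombination) time scales. Data and parameter values are synthetic stand-ins, not the WTe2 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.01, 50, 500)                 # pump-probe delay in ps
np.random.seed(0)
signal = two_exp(t, 1.0, 0.5, 0.3, 10.0)       # 0.5 ps fast, 10 ps slow component
signal += np.random.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(two_exp, t, signal, p0=(1.0, 1.0, 0.1, 5.0))
print(popt)                                    # recovers both relaxation times
```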

  20. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    NASA Astrophysics Data System (ADS)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
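
    For context, the Fickian baseline included in the benchmark has a classical closed form; the sketch below evaluates the Ogata-Banks breakthrough solution of the one-dimensional advection-dispersion equation for a continuous injection. Parameter values are illustrative, and the paper's replicate-aware objective function is not reproduced.

```python
import numpy as np
from scipy.special import erfc

def ade_breakthrough(t, x, v, D):
    """Ogata-Banks solution: relative concentration at distance x, time t."""
    t = np.asarray(t, dtype=float)
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

t = np.linspace(0.1, 100.0, 200)               # time since injection
c = ade_breakthrough(t, x=1.0, v=0.05, D=0.002)
print(c[:3], c[-3:])                           # rising breakthrough curve
```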

  1. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  2. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities (Book)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-03-01

    To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment that is required to meet the goals for large-scale renewable energy projects will come from the private sector, this guide has been organized to match Federal processes with typical phases of commercial project development. The main purpose of this guide is to provide a project development framework to allow the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project.

  3. Breast and Prostate Cancer and Hormone-Related Gene Variant Study

    Cancer.gov

    The Breast and Prostate Cancer and Hormone-Related Gene Variant Study allows large-scale analyses of breast and prostate cancer risk in relation to genetic polymorphisms and gene-environment interactions that affect hormone metabolism.

  4. Single-trabecula building block for large-scale finite element models of cancellous bone.

    PubMed

    Dagan, D; Be'ery, M; Gefen, A

    2004-07-01

    Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.

  5. Tools for phospho- and glycoproteomics of plasma membranes.

    PubMed

    Wiśniewski, Jacek R

    2011-07-01

    Analysis of plasma membrane proteins and their posttranslational modifications is considered important for the identification of disease markers and targets for drug treatment. Due to their insolubility in water, the study of plasma membrane proteins using mass spectrometry was difficult for a long time. Recent technological developments in sample preparation, together with important improvements in mass spectrometric analysis, have facilitated the analysis of these proteins and their posttranslational modifications. Large-scale proteomic analyses now allow identification of thousands of membrane proteins from minute amounts of sample. Optimized protocols for affinity enrichment of phosphorylated and glycosylated peptides have set new dimensions in the depth of characterization of these posttranslational modifications of plasma membrane proteins. Here, I summarize recent advances in proteomic technology for the characterization of cell surface proteins and their modifications. The focus is on approaches that allow large-scale mapping rather than analytical methods suited to studying individual proteins or non-complex mixtures.

  6. 200-W single frequency laser based on short active double clad tapered fiber

    NASA Astrophysics Data System (ADS)

    Pierre, Christophe; Guiraud, Germain; Yehouessi, Jean-Paul; Santarelli, Giorgio; Boullet, Johan; Traynor, Nicholas; Vincont, Cyril

    2018-02-01

    High power single frequency lasers are very attractive for a wide range of applications such as nonlinear conversion, gravitational wave sensing, or atom trapping. Power scaling in the single frequency regime is a challenging domain of research: nonlinear effects such as stimulated Brillouin scattering (SBS) are the primary power limitation in single frequency amplifiers. To mitigate SBS, several well-known techniques have been developed, allowing the generation of several hundred watts [1]. Large mode area (LMA) fibers, transverse acoustically tailored fibers [2], coherent beam combining, and tapered fibers [3] are serious candidates for continued power scaling. We have demonstrated the generation of a stable 200 W output power with a nearly diffraction-limited output and narrow linewidth (Δν < 30 kHz) by using a tapered Yb-doped fiber, which allows an adiabatic transition from a small, purely single-mode input to a large-core output.

  7. Assessing the Challenges in the Application of Potential Probiotic Lactic Acid Bacteria in the Large-Scale Fermentation of Spanish-Style Table Olives.

    PubMed

    Rodríguez-Gómez, Francisco; Romero-Gil, Verónica; Arroyo-López, Francisco N; Roldán-Reyes, Juan C; Torres-Gallardo, Rosa; Bautista-Gallego, Joaquín; García-García, Pedro; Garrido-Fernández, Antonio

    2017-01-01

    This work studies inoculation conditions that allow the survival/predominance of a potential probiotic strain (Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent wild initial microbiota growth) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70-90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. Results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase.

  8. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
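
    The first feature listed, spectral evaluation of spatial derivatives, is easy to sketch: differentiate a 3D field along x by multiplying its Fourier transform by ik_x. The temporal correction, staggered grids, and perfectly matched layer of the full method are not shown.

```python
import numpy as np

def ddx_spectral(f, dx):
    """d/dx of a 3D periodic field via FFT (spectral accuracy)."""
    n = f.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers
    return np.fft.ifftn(1j * kx[:, None, None] * np.fft.fftn(f)).real

x = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
f = np.sin(x)[:, None, None] * np.ones((32, 32, 32))  # f(x,y,z) = sin(x)
df = ddx_spectral(f, x[1] - x[0])
print(np.max(np.abs(df[:, 0, 0] - np.cos(x))))        # ~ machine precision
```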

  9. The Role of Training Allowances in Incentivising the Behaviour of Young People and Employers. Research Report RR756

    ERIC Educational Resources Information Center

    Spielhofer, Thomas; Nelson, Julie; O'Donnell, Lisa; Sims, David

    2006-01-01

    The results reported in this document are from a large scale survey of employers, apprentices, Local Learning and Skills Councils (LLSCs), and providers carried out by the National Foundation for Educational Research (NFER) between June 2005 and March 2006, which explored their views and experiences of the use and impact of training allowances.…

  10. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies

    PubMed Central

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948

  11. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets.

  12. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
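
    A much-simplified sketch of the two-level idea, using uniform disk checkpoints plus in-memory recomputation of each segment (the paper's scheme uses bandwidth-limited disk checkpoints and a binomial memory policy instead of storing every recomputed state):

```python
def adjoint_reverse(n_steps, disk_every, step, adjoint_step, state0):
    """Reverse a time integration for the adjoint using two storage levels."""
    assert n_steps % disk_every == 0       # keep the toy scheme simple
    disk = {0: state0}                     # level 1: sparse "disk" checkpoints
    s = state0
    for i in range(1, n_steps + 1):        # forward sweep
        s = step(s)
        if i % disk_every == 0:
            disk[i] = s
    adj = None
    for seg_end in range(n_steps, 0, -disk_every):
        seg_start = seg_end - disk_every
        states = [disk[seg_start]]         # level 2: segment kept in memory
        s = disk[seg_start]
        for _ in range(disk_every - 1):    # recompute the segment forward
            s = step(s)
            states.append(s)
        for s in reversed(states):         # run the adjoint backwards over it
            adj = adjoint_step(s, adj)
    return adj

# Toy check on a linear model s -> 0.9*s: the adjoint of 8 steps is 0.9**8.
grad = adjoint_reverse(8, 4, lambda s: 0.9 * s,
                       lambda s, a: 0.9 * (1.0 if a is None else a), 1.0)
print(grad)   # 0.43046721 == 0.9**8
```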

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations can be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that occupy only part of the domain. This problem is largely reduced by using the gridded forcing data, which allow SCAM5 to be run in each subcolumn and the results then averaged within the domain; the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Finally, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  14. Numerical study of axial turbulent flow over long cylinders

    NASA Technical Reports Server (NTRS)

    Neves, J. C.; Moin, P.; Moser, R. D.

    1991-01-01

    The effects of transverse curvature are investigated by means of direct numerical simulations of turbulent axial flow over cylinders. Two cases of Reynolds number of about 3400 and layer-thickness-to-cylinder-radius ratios of 5 and 11 were simulated. All essential turbulence scales were resolved in both calculations, and a large number of turbulence statistics were computed. The results are compared with the plane channel results of Kim et al. (1987) and with experiments. With transverse curvature the skin friction coefficient increases and the turbulence statistics, when scaled with wall units, are lower than in the plane channel. The momentum equation provides a scaling that collapses the cylinder statistics, and allows the results to be interpreted in light of the plane channel flow. The azimuthal and radial length scales of the structures in the flow are of the order of the cylinder diameter. Boomerang-shaped structures with large spanwise length scales were observed in the flow.

  15. Truancy and Well-Being among Secondary School Pupils in England

    ERIC Educational Resources Information Center

    Attwood, Gaynor; Croll, Paul

    2015-01-01

    The paper considers two problematic aspects of the lives of young people: the long-standing issues of truancy from school and more recent concerns about the extent of mental well-being. It uses data from a large-scale survey, the Longitudinal Study of Young People in England (LSYPE). LSYPE provides a very large sample which allows for robust…

  16. Low-energy transmission electron diffraction and imaging of large-area graphene

    PubMed Central

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-01-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials. PMID:28879233

  17. Low-energy transmission electron diffraction and imaging of large-area graphene.

    PubMed

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-09-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials.

  18. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  19. A transportable Paul-trap for levitation and accurate positioning of micron-scale particles in vacuum for laser-plasma experiments

    NASA Astrophysics Data System (ADS)

    Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.

    2018-01-01

    We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.

  20. The Chandra Deep Wide-Field Survey: Completing the new generation of Chandra extragalactic surveys

    NASA Astrophysics Data System (ADS)

    Hickox, Ryan

    2016-09-01

    Chandra X-ray surveys have revolutionized our view of the growth of black holes across cosmic time. Recently, fundamental questions have emerged about the connection of AGN to their host large scale structures that clearly demand a wide, deep survey over a large area, comparable to the recent extensive Chandra surveys in smaller fields. We propose the Chandra Deep Wide-Field Survey (CDWFS) covering the central 6 sq. deg in the Bootes field, totaling 1.025 Ms (building on 550 ks from the HRC GTO program). CDWFS will efficiently probe a large cosmic volume, allowing us to carry out accurate new investigations of the connections between black holes and their large-scale structures, and will complete the next generation surveys that comprise a key part of Chandra's legacy.

  1. Methodology for enabling high-throughput simultaneous saccharification and fermentation screening of yeast using solid biomass as a substrate.

    PubMed

    Elliston, Adam; Wood, Ian P; Soucouri, Marie J; Tantale, Rachelle J; Dicks, Jo; Roberts, Ian N; Waldron, Keith W

    2015-01-01

    High-throughput (HTP) screening is becoming an increasingly useful tool for collating biological data that would otherwise require the employment of excessive resources. Second-generation biofuel production is one such process. HTP screening allows the investigation of large sample sets to be undertaken with increased speed and cost effectiveness. This paper outlines a methodology that enables solid lignocellulosic substrates to be hydrolyzed and fermented at a 96-well plate scale, facilitating HTP screening of ethanol production whilst maintaining repeatability similar to that achieved at a larger scale. The results showed that utilizing sheets of biomass of consistent density (handbills) for paper, and slurries of pretreated biomass that could be pipetted, allowed standardized and accurate transfers to 96-well plates to be achieved (±3.1 and 1.7%, respectively). Processing these substrates by simultaneous saccharification and fermentation (SSF) at various volumes showed no significant difference in final ethanol yields, whether at standard shake flask (200 mL), universal bottle (10 mL) or 96-well plate (1 mL) scales. Substrate concentrations of up to 10% (w/v) were trialed successfully for SSFs at 1 mL volume. The methodology was successfully tested by showing the effects of steam explosion pretreatment on both oilseed rape and wheat straws. This methodology could be used to replace large shake flask reactions with comparatively fast 96-well plate SSF assays, allowing for HTP experimentation. Additionally, this method is compatible with a number of standardized assay techniques, such as simple colorimetric assays, high-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) spectroscopy. Furthermore, this research has practical uses in the biorefining of biomass substrates for second-generation biofuels and novel biobased chemicals by allowing HTP SSF screening, which should allow selected samples to be scaled up or studied in more detail.

  2. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
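
    The key design idea, compressing each trend into thresholded wavelet coefficients and indexing those in a relational table so that multi-scale queries avoid scanning raw samples, can be sketched as follows. The schema, threshold, and use of sqlite3 in place of the authors' MySQL database are illustrative assumptions, not the paper's actual implementation.

      # Sketch: store thresholded wavelet coefficients of a vital-sign trend in a
      # relational table for fast multi-scale event queries (names illustrative).
      import sqlite3
      import numpy as np
      import pywt

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE wavelet_coeffs (
          signal_id INTEGER, scale INTEGER, position INTEGER, value REAL)""")

      trend = np.cumsum(np.random.randn(1024))        # stand-in for a heart-rate trend
      coeffs = pywt.wavedec(trend, "db4", level=5)    # multi-resolution decomposition
      for scale, band in enumerate(coeffs):
          keep = np.abs(band) > 0.5                   # thresholding compacts storage
          rows = [(1, scale, int(p), float(band[p])) for p in np.where(keep)[0]]
          conn.executemany("INSERT INTO wavelet_coeffs VALUES (?, ?, ?, ?)", rows)

      # Coarse-scale coefficients capture slow hemodynamic shifts, so an event
      # query touches only a few indexed rows instead of the raw time series:
      top = conn.execute("""SELECT position, value FROM wavelet_coeffs
                            WHERE signal_id = 1 AND scale <= 2
                            ORDER BY ABS(value) DESC LIMIT 5""").fetchall()
      print(top)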

  3. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.

  4. Enzymatic regeneration of adenosine triphosphate cofactor

    NASA Technical Reports Server (NTRS)

    Marshall, D. L.

    1974-01-01

    Regenerating adenosine triphosphate (ATP) from adenosine diphosphate (ADP) by an enzymatic process that utilizes carbamyl phosphate as the phosphoryl donor is a technique used to regenerate expensive cofactors. The process allows complex enzymatic reactions to be considered as candidates for large-scale continuous processes.

  5. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    PubMed

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
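
    As a minimal illustration of this kind of agent-based approach, the sketch below places agents with simple item memories on a small-world network and lets repeated pairwise interactions implant retrieved items in both partners; overlap then accumulates even between non-neighbors. The network type, update rule, and parameters are illustrative assumptions, not the authors' model.

      # Toy agent-based memory-transmission model on a network (illustrative).
      import random
      import networkx as nx

      random.seed(1)
      G = nx.connected_watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)
      memory = {n: {random.randrange(50)} for n in G.nodes}  # one initial item each

      for _ in range(2000):                     # repeated pairwise "collaboration"
          a = random.choice(list(G.nodes))
          b = random.choice(list(G.neighbors(a)))
          item = random.choice(sorted(memory[a] | memory[b]))
          memory[a].add(item)                   # a retrieved item ends up encoded
          memory[b].add(item)                   # in both partners' memories

      # Convergence also emerges between agents that never interact directly,
      # mediated by chains of shared neighbors:
      overlap = [len(memory[u] & memory[v]) for u, v in nx.non_edges(G)]
      print(sum(overlap) / len(overlap))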

  6. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods have been proposed that exploit the redundancy of event-station pairs, utilizing stacking techniques to obtain systematic source parameter estimates for a large quantity of events at the same time. This allows us to examine large quantities of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, choices of model parameters and fitting methods also introduce biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods in completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.
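
    The core of the EGF technique is that path and site terms are common to co-located events and cancel in a spectral ratio. The toy calculation below shows this cancellation with synthetic Brune-type spectra; the attenuation model and parameter values are illustrative assumptions.

      # Toy spectral-ratio (EGF) calculation with synthetic spectra.
      import numpy as np

      f = np.linspace(0.05, 40.0, 800)                # frequency axis (Hz)

      def observed_spectrum(moment, fc):
          source = moment / (1.0 + (f / fc) ** 2)     # Brune omega-square source
          path = np.exp(-np.pi * f * 0.05)            # shared attenuation (t* = 0.05 s)
          return source * path

      big = observed_spectrum(1e17, 0.5)              # target event
      small = observed_spectrum(1e14, 5.0)            # empirical Green's function
      ratio = big / small                             # the path term cancels exactly

      # `ratio` now follows (M1/M2) * (1 + (f/fc2)^2) / (1 + (f/fc1)^2); fitting
      # it yields the corner frequencies and hence source size and stress drop.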

  7. Supermassive Black Holes and Galaxy Evolution

    NASA Technical Reports Server (NTRS)

    Merritt, D.

    2004-01-01

    Supermassive black holes appear to be generic components of galactic nuclei. The formation and growth of black holes is intimately connected with the evolution of galaxies on a wide range of scales. For instance, mergers between galaxies containing nuclear black holes would produce supermassive binaries which eventually coalesce via the emission of gravitational radiation. The formation and decay of these binaries is expected to produce a number of observable signatures in the stellar distribution. Black holes can also affect the large-scale structure of galaxies by perturbing the orbits of stars that pass through the nucleus. Large-scale N-body simulations are beginning to generate testable predictions about these processes which will allow us to draw inferences about the formation history of supermassive black holes.

  8. Stable isotope probing to study functional components of complex microbial ecosystems.

    PubMed

    Mazard, Sophie; Schäfer, Hendrik

    2014-01-01

    This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure culture-independent technique circumvents limitations of traditional microbial isolation techniques or data mining from large-scale whole-community metagenomic studies to tease out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, from fatty-acid analyses, community structure and ecology studies, and targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and time-consuming bioinformatics analyses of large datasets.

  9. Closing in on the large-scale CMB power asymmetry

    NASA Astrophysics Data System (ADS)

    Contreras, D.; Hutchinson, J.; Moss, A.; Scott, D.; Zibin, J. P.

    2018-03-01

    Measurements of the cosmic microwave background (CMB) temperature anisotropies have revealed a dipolar asymmetry in power at the largest scales, in apparent contradiction with the statistical isotropy of standard cosmological models. The significance of the effect is not very high, and is dependent on a posteriori choices. Nevertheless, a number of models have been proposed that produce a scale-dependent asymmetry. We confront several such models for a physical, position-space modulation with CMB temperature observations. We find that, while some models that maintain the standard isotropic power spectrum are allowed, others, such as those with modulated tensor or uncorrelated isocurvature modes, can be ruled out on the basis of the overproduction of isotropic power. This remains the case even when an extra isocurvature mode fully anticorrelated with the adiabatic perturbations is added to suppress power on large scales.

  10. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26--29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  11. Constraints on the Origin of Cosmic Rays above $10^{18}$ eV from Large-scale Anisotropy Searches in Data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antiči'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. 
A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, R.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. 
J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.

    2013-01-01

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above $10^{18}$ eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both the right ascension and the declination and expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic ray anisotropy is dominated by such moments in this energy range. These upper limits provide constraints on the production of cosmic rays above $10^{18}$ eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.

  12. Music in the moment? Revisiting the effect of large scale structures.

    PubMed

    Lalitte, P; Bigand, E

    2006-12-01

    The psychological relevance of large-scale musical structures has been a matter of debate in the music community. This issue was investigated with a method that allows assessing listeners' detection of musical incoherencies in normal and scrambled versions of popular and contemporary music pieces. Musical excerpts were segmented into 28 or 29 chunks. In the scrambled version, the temporal order of these chunks was altered with the constraint that the transitions between two chunks never created local acoustical and musical disruptions. Participants were required (1) to detect on-line incoherent linking of chunks, (2) to rate aesthetic quality of pieces, and (3) to evaluate their overall coherence. The findings indicate a moderate sensitivity to large-scale musical structures for popular and contemporary music in both musically trained and untrained listeners. These data are discussed in light of current models of music cognition.

  13. Predicting protein functions from redundancies in large-scale protein interaction networks

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
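
    One standard way to quantify "more shared partners than expected at random" is a hypergeometric tail probability on the overlap of the two proteins' neighbor sets; the snippet below shows that calculation. This is a generic formulation, not necessarily the exact statistic used in the paper.

      # Significance of the number of shared interaction partners (illustrative).
      from scipy.stats import hypergeom

      def shared_partner_pvalue(n_proteins, deg_a, deg_b, n_shared):
          """P(overlap >= n_shared) if protein B's deg_b partners were drawn at
          random from n_proteins, deg_a of which are partners of protein A."""
          return hypergeom.sf(n_shared - 1, n_proteins, deg_a, deg_b)

      # Two proteins with 30 and 40 partners out of ~6000 yeast proteins,
      # sharing 10 partners: a tiny p-value suggests a functional association.
      print(shared_partner_pvalue(6000, 30, 40, 10))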

  14. Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos

    NASA Technical Reports Server (NTRS)

    Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.

    1994-01-01

    Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe, with the spectrum normalized to the anisotropies detected by the Cosmic Background Explorer (COBE), are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion $\sigma_v$, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting $\sigma_v$ are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.

  15. Gaussian processes for personalized e-health monitoring with wearable sensors.

    PubMed

    Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel

    2013-01-01

    Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) demonstration of the method using a large-scale clinical study in which 200 patients have been monitored using the proposed system. The latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
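
    The personalization idea, learning a patient-specific model of normality and alarming only when a new reading falls far outside its predictive envelope, can be sketched with an off-the-shelf Gaussian process regressor. The kernel, synthetic data, and 3-sigma alarm rule below are illustrative assumptions, not the paper's configuration.

      # Hedged sketch: a personalized GP envelope for one vital sign.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      t = np.linspace(0, 24, 50)[:, None]                # hours of monitoring
      hr = 70 + 5 * np.sin(t / 3).ravel() + rng.normal(0, 1, 50)  # synthetic HR

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1.0))
      gp.fit(t, hr)                                      # patient-specific model

      mean, std = gp.predict(np.array([[25.0]]), return_std=True)
      new_reading = 110.0                                # next observed heart rate
      alarm = abs(new_reading - mean[0]) > 3 * std[0]    # flag only clear departures
      print(mean[0], std[0], alarm)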

  16. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. Unlike previous works, however, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  17. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    NASA Technical Reports Server (NTRS)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated, and the associated software system developed, to improve high-resolution coastal ocean prediction. The system improves coastal ocean prediction skill and has been used in support of operational coastal ocean forecasting systems and field experiments. It was developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as for constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows large scales and model bias to be constrained effectively by assimilating sparse vertical profiles, and small scales by assimilating high-resolution surface measurements. MS-3DVAR thus enhances the capability of traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
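
    For reference, each of the two units has the schematic form of a standard 3DVAR cost function. The decomposition below, into sequentially minimized large- and small-scale functionals with scale-dependent background covariances $B_L$ and $B_S$, is a generic rendering consistent with the abstract rather than the system's exact formulation:

      $J_L(x_L) = \frac{1}{2}(x_L - x_b)^T B_L^{-1} (x_L - x_b) + \frac{1}{2}\big(H(x_L) - y\big)^T R^{-1} \big(H(x_L) - y\big)$

      $J_S(x_S) = \frac{1}{2}\, x_S^T B_S^{-1} x_S + \frac{1}{2}\big(H(x_L^a + x_S) - y\big)^T R^{-1} \big(H(x_L^a + x_S) - y\big)$

    Here $x_b$ is the background state, $y$ the observations, $H$ the observation operator, $R$ the observation error covariance, and $x_L^a$ the large-scale analysis obtained from minimizing $J_L$ before $J_S$ is minimized for the small-scale increment.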

  18. Analytic prediction of baryonic effects from the EFT of large scale structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu

    2015-05-01

    The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as $\propto k^2 P(k)$, with $P(k)$ being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as $k \simeq 0.3\,h\,{\rm Mpc}^{-1}$ or $k \simeq 0.6\,h\,{\rm Mpc}^{-1}$, depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.
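
    Written out, the statement about the leading baryonic contribution is that the difference between the power spectrum with baryons and the dark-matter-only one takes the form

      $\Delta P_b(k) \equiv P_{\rm w/\,baryons}(k) - P_{\rm no\,baryons}(k) \simeq \alpha\, k^2 P(k)$,

    where $P(k)$ is the linear power spectrum and $\alpha$ is a free coefficient absorbing the star-formation physics (the symbol $\alpha$ is our notation, not necessarily the paper's).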

  19. Incorporating human-water dynamics in a hyper-resolution land surface model

    NASA Astrophysics Data System (ADS)

    Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.

    2017-12-01

    The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models, which in turn leads to an inability to capture extreme event signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks, a hyper-resolution, field-scale-resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs), and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in a hyper-resolution LSM, this work allows for progress on hydrological monitoring and predictions, as well as drought preparedness and water impact assessments at relevant decision-making scales.

  20. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain.

    PubMed

    Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G

    2016-05-25

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale-resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and the neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals, as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes, which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide quantifiable manner.

  1. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain

    PubMed Central

    Kuipers, Jeroen; Kalicharan, Ruby D.; Wolters, Anouk H. G.

    2016-01-01

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale-resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and the neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals, as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes, which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide quantifiable manner. PMID:27285162

  2. Large-scale parentage inference with SNPs: an efficient algorithm for statistical confidence of parent pair allocations.

    PubMed

    Anderson, Eric C

    2012-11-08

    Advances in genotyping that allow tens of thousands of individuals to be genotyped at a moderate number of single nucleotide polymorphisms (SNPs) permit parentage inference to be pursued on a very large scale. The intergenerational tagging this capacity allows is revolutionizing the management of cultured organisms (cows, salmon, etc.) and is poised to do the same for scientific studies of natural populations. Currently, however, there are no likelihood-based methods of parentage inference which are implemented in a manner that allows them to quickly handle a very large number of potential parents or parent pairs. Here we introduce an efficient likelihood-based method applicable to the specialized case of cultured organisms in which both parents can be reliably sampled. We develop a Markov chain representation for the cumulative number of Mendelian incompatibilities between an offspring and its putative parents and we exploit it to develop a fast algorithm for simulation-based estimates of statistical confidence in SNP-based assignments of offspring to pairs of parents. The method is implemented in the freely available software SNPPIT. We describe the method in detail, then assess its performance in a large simulation study using known allele frequencies at 96 SNPs from ten hatchery salmon populations. The simulations verify that the method is fast and accurate and that 96 well-chosen SNPs can provide sufficient power to identify the correct pair of parents from amongst millions of candidate pairs.
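
    The Markov chain in question tracks the running count of Mendelian incompatibilities, loci where the offspring genotype cannot arise from one allele of each putative parent. The helper below just counts those incompatibilities for 0/1/2-coded biallelic SNP genotypes; it illustrates the underlying check, not the SNPPIT algorithm itself, whose confidence estimates come from simulating this count's distribution.

      # Count Mendelian incompatibilities at biallelic SNPs (illustrative coding:
      # genotypes are 0/1/2 copies of the reference allele).
      GAMETES = {0: {0}, 1: {0, 1}, 2: {1}}    # alleles a parent can transmit

      def mendelian_incompatibilities(offspring, mother, father):
          count = 0
          for o, m, f in zip(offspring, mother, father):
              possible = {a + b for a in GAMETES[m] for b in GAMETES[f]}
              if o not in possible:            # e.g. parents 0 and 0 -> child 2
                  count += 1
          return count

      # A putative trio with one incompatibility, at the third SNP:
      print(mendelian_incompatibilities([2, 0, 1], [1, 0, 2], [2, 1, 2]))

    A parent pair is rejected when this count exceeds a threshold, and the statistical confidence of an assignment follows from the simulated distributions of the count for true versus false pairs.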

  3. Large scale Full QM-MD investigation of small peptides and insulin adsorption on ideal and defective TiO2 (1 0 0) surfaces. Influence of peptide size on interfacial bonds

    NASA Astrophysics Data System (ADS)

    Dubot, Pierre; Boisseau, Nicolas; Cenedese, Pierre

    2018-05-01

    The interaction of large biomolecules with oxide surfaces has attracted considerable attention because it drives the behavior of implanted devices in the living body. To investigate the role of TiO2 surface structure in the adsorption of a large polypeptide (insulin), we use a homemade code mixing Molecular Dynamics with full large-scale Quantum Mechanics. A specifically re-parameterized (for Ti) and globally convergent NDDO method, fitted to high-level ab initio results (coupled-cluster CCSD(T) and DFT), allows us to safely describe the electronic structure of the whole insulin-TiO2 surface system (up to 4000 atoms). Looking specifically at carboxylate residues, we demonstrate in this work that specific interfacial bonds are obtained for the insulin/TiO2 system that are not observed in the case of smaller peptides (tripeptides, insulin segment chains with different configurations). We also demonstrate that a large part of the adsorption energy is compensated by insulin conformational energy changes, and surface defects enhance this trend. Large slab dimensions allow us to take into account surface defects that are actually beyond ab initio capabilities owing to size effects. These results highlight the influence of the surface structure on the conformation, and therefore on the possible inactivity, of adsorbed polypeptides.

  4. Detectability of large-scale power suppression in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan

    2010-12-01

    Suppression in primordial power on the Universe's largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (~100 Gpc$^3$) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
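
    The abstract does not spell out the parametric form. Purely as an illustration of what such a model looks like, a commonly used choice multiplies the standard spectrum by an exponential cutoff,

      $P(k) = P_{\Lambda{\rm CDM}}(k)\,\big[1 - e^{-(k/k_c)^{\alpha}}\big]$,

    which removes power for $k \ll k_c$ while leaving smaller scales untouched; the model tested in the paper plays the same role but may differ in detail.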

  5. Mars' Magnetic Atmosphere: Ionospheric Currents, Lightning (or Not), E and M Subsurface Sounding, and Future Missions

    NASA Technical Reports Server (NTRS)

    Espley, J. R.; Connerney, J. E. P.

    2014-01-01

    Mars' ionosphere shows no obvious magnetic signs of large-scale, dust-produced lightning. However, there are numerous interesting ionospheric currents (some associated with crustal magnetic fields) which would allow for E&M subsurface sounding.

  6. Large-scale frequency- and time-domain quantum entanglement over the optical frequency comb (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier

    2017-05-01

    When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large-scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3,000) and present an extension combining this frequency-tagged entanglement with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.

  7. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    NASA Astrophysics Data System (ADS)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of the large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research uses synoptic climatology as a tool to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and with associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture key patterns associated with extreme precipitation over Portland, and in better interpreting projections of future climate at impact-relevant scales.
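
    As a concrete illustration of the self-organizing maps step, the sketch below organizes daily circulation fields onto a small node grid so that each node becomes a characteristic pattern and each day is assigned to its best-matching node. The MiniSom library, grid size, and synthetic data are stand-ins; the study's actual configuration may differ.

      # Illustrative SOM classification of daily large-scale circulation fields.
      import numpy as np
      from minisom import MiniSom

      rng = np.random.default_rng(0)
      days, ny, nx = 1000, 20, 30
      fields = rng.standard_normal((days, ny * nx))   # flattened daily anomalies

      som = MiniSom(3, 4, ny * nx, sigma=1.0, learning_rate=0.5, random_seed=1)
      som.train_random(fields, 5000)                  # organize days onto a 3x4 grid

      # Each day maps to a node (a recurring pattern); counting extreme-
      # precipitation days per node reveals which circulation types drive them.
      node_of_day = [som.winner(day) for day in fields]
      print(node_of_day[:5])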

  8. The iMoD display: considerations and challenges in fabricating MOEMS on large area glass substrates

    NASA Astrophysics Data System (ADS)

    Chui, Clarence; Floyd, Philip D.; Heald, David; Arbuckle, Brian; Lewis, Alan; Kothari, Manish; Cummings, Bill; Palmateer, Lauren; Bos, Jan; Chang, Daniel; Chiang, Jedi; Wang, Li-Ming; Pao, Edmon; Su, Fritz; Huang, Vincent; Lin, Wen-Jian; Tang, Wen-Chung; Yeh, Jia-Jiun; Chan, Chen-Chun; Shu, Fang-Ann; Ju, Yuh-Diing

    2007-01-01

    QUALCOMM has developed and transferred to manufacturing iMoD displays, a MEMS-based reflective display technology. The iMoD array architecture allows for development at wafer scale, yet easily scales up to enable fabrication on flat-panel display (FPD) lines. In this paper, we will describe the device operation, process flow and fabrication, technology transfer issues, and display performance.

  9. Teacher Beliefs, Teacher Concerns, and School Leadership Support as Influences on School Readiness for Implementing a Research-Based Reform Model

    ERIC Educational Resources Information Center

    Carhart, Elizabeth Hoag

    2013-01-01

    Federal policy makers and school leaders increasingly recognize middle school math as a turning point in students' academic success. An i3 scale-up grant allowed grant partners to conduct a large-scale implementation of PowerTeaching (PT), a research-based reform to increase student math achievement. In a mixed-methods study during the pilot phase…

  10. Minimal non-abelian supersymmetric Twin Higgs

    DOE PAGES

    Badziak, Marcin; Harigaya, Keisuke

    2017-10-17

    We propose a minimal supersymmetric Twin Higgs model that can accommodate tuning of the electroweak scale for heavy stops better than 10% with high mediation scales of supersymmetry breaking. A crucial ingredient of this model is a new SU(2)_X gauge symmetry which provides a D-term potential that generates a large SU(4)-invariant coupling for the Higgs sector, with only a small set of particles charged under SU(2)_X, which allows the model to be perturbative around the Planck scale. The new gauge interaction drives the top Yukawa coupling to smaller values at higher energy scales, which also reduces the tuning.

  11. 2:1 for naturalness at the LHC?

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Blum, Kfir; D'Agnolo, Raffaele Tito; Fan, JiJi

    2013-01-01

    A large enhancement of a factor of 1.5 - 2 in Higgs production and decay in the diphoton channel, with little deviation in the ZZ channel, can only plausibly arise from a loop of new charged particles with large couplings to the Higgs. We show that, allowing only new fermions with marginal interactions at the weak scale, the required Yukawa couplings for a factor of 2 enhancement are so large that the Higgs quartic coupling is pushed to large negative values in the UV, triggering an unacceptable vacuum instability far beneath the 10 TeV scale. An enhancement by a factor of 1.5 can be accommodated if the charged particles are lighter than 150 GeV, within reach of discovery in almost all cases in the 8 TeV run at the LHC, and in even the most difficult cases at 14 TeV. Thus if the diphoton enhancement survives further scrutiny, and no charged particles beneath 150 GeV are found, there must be new bosons far beneath the 10 TeV scale. This would unambiguously rule out a large class of fine-tuned theories for physics beyond the Standard Model, including split SUSY and many of its variants, and provide strong circumstantial evidence for a natural theory of electroweak symmetry breaking at the TeV scale. Alternately, theories with only a single fine-tuned Higgs and new fermions at the weak scale, with no additional scalars or gauge bosons up to a cutoff much larger than the 10 TeV scale, unambiguously predict that the hints for a large diphoton enhancement in the current data will disappear.

  12. Hydropower and sustainability: resilience and vulnerability in China's powersheds.

    PubMed

    McNally, Amy; Magee, Darrin; Wolf, Aaron T

    2009-07-01

    Large dams represent a whole complex of social, economic and ecological processes, perhaps more than any other large infrastructure project. Today, countries with rapidly developing economies are constructing new dams to provide energy and flood control to growing populations in riparian and distant urban communities. If the system is lacking institutional capacity to absorb these physical and institutional changes there is potential for conflict, thereby threatening human security. In this paper, we propose analyzing sustainability (political, socioeconomic, and ecological) in terms of resilience versus vulnerability, framed within the spatial abstraction of a powershed. The powershed framework facilitates multi-scalar and transboundary analysis while remaining focused on the questions of resilience and vulnerability relating to hydropower dams. Focusing on examples from China, this paper describes the complex nature of dams using the sustainability and powershed frameworks. We then analyze the roles of institutions in China to understand the relationships between power, human security and the socio-ecological system. To inform the study of conflicts over dams, China is a particularly useful case study because we can examine what happens at the international, national and local scales. The powershed perspective allows us to examine resilience and vulnerability across political boundaries from a dynamic, process-defined analytical scale while remaining focused on a host of questions relating to hydro-development that invoke drivers and impacts on national and sub-national scales. The ability to disaggregate the effects of hydropower dam construction from political boundaries allows for a deeper analysis of resilience and vulnerability. From our analysis we find that reforms in China's hydropower sector since 1996 have been motivated by the need to create stability at the national scale rather than resilient solutions to China's growing demand for energy and water resource control at the local and international scales. Some measures that improved economic development through the market economy and a combination of dam construction and institutional reform may indeed improve hydro-political resilience at a single scale. However, if China does not address large-scale hydropower construction's potential to create multi-scale geopolitical tensions, it may be vulnerable to conflict, though not necessarily violent conflict, in domestic and international political arenas. We conclude with a look toward a resilient basin institution for the Nu/Salween River, the site of a proposed large-scale hydropower development effort in China and Myanmar.

  13. Genes mirror geography in Daphnia magna.

    PubMed

    Fields, Peter D; Reisser, Céline; Dukić, Marinela; Haag, Christoph R; Ebert, Dieter

    2015-09-01

    Identifying the presence and magnitude of population genetic structure remains a major consideration in evolutionary biology, as doing so allows one to understand the demographic history of a species as well as make predictions of how the evolutionary process will proceed. Next-generation sequencing methods allow us to reconsider previous ideas and conclusions concerning the distribution of genetic variation, and what this distribution implies about a given species' evolutionary history. A previous phylogeographic study of the crustacean Daphnia magna suggested that, despite strong genetic differentiation among populations at a local scale, the species shows only moderate genetic structure across its European range, with a spatially patchy occurrence of individual lineages. We apply RAD sequencing to a sample of D. magna collected across a wide swath of the species' Eurasian range and analyse the data using principal component analysis (PCA) of genetic variation and Procrustes analytical approaches, to quantify spatial genetic structure. We find remarkable consistency between the first two PCA axes and the geographic coordinates of individual sampling points, suggesting that, on a continent-wide scale, genetic differentiation is driven to a large extent by geographic distance. The observed pattern is consistent with unimpeded (i.e. no barriers, landscape or otherwise) migration at large spatial scales, despite the fragmented and patchy nature of favourable habitats at local scales. With high-resolution genetic data, similar patterns may be uncovered for other species with wide geographic distributions, allowing an increased understanding of how genetic drift and selection have shaped their evolutionary history. © 2015 John Wiley & Sons Ltd.
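
    A hedged sketch of the core analysis: PCA of a genotype matrix followed by a Procrustes superimposition against sampling coordinates. The synthetic genotypes (with a weak geographic gradient baked in so there is structure to recover) stand in for the study's RAD-seq data.

    ```python
    # "Genes mirror geography" toy analysis: PCA + Procrustes. Illustrative only.
    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(1)

    # Synthetic genotypes (individuals x SNPs, coded 0/1/2) with a weak
    # geographic signal.
    n_ind, n_snp = 60, 500
    coords = rng.uniform(0, 10, size=(n_ind, 2))          # lon/lat stand-ins
    gradient = coords @ rng.normal(size=(2, n_snp)) * 0.05
    genotypes = rng.binomial(2, 1 / (1 + np.exp(-gradient)))

    # PCA via SVD of the column-centered genotype matrix.
    centered = genotypes - genotypes.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = u[:, :2] * s[:2]                                # first two PC scores

    # Procrustes: residual disparity after optimally rotating/scaling the
    # PC map onto the geographic map (0 would be a perfect match).
    _, _, disparity = procrustes(coords, pcs)
    print(f"Procrustes disparity: {disparity:.3f}")
    ```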

  14. Cirrus cloud development in a mobile upper tropospheric trough: The November 26th FIRE cirrus case study

    NASA Technical Reports Server (NTRS)

    Mace, Gerald G.; Ackerman, Thomas P.

    1993-01-01

    The period from 18 UTC 26 Nov. 1991 to roughly 23 UTC 26 Nov. 1991 is one of the study periods of the FIRE (First ISCCP Regional Experiment) II field campaign. The middle and upper tropospheric cloud data that were collected during this time allowed FIRE scientists to learn a great deal about the detailed structure, microphysics, and radiative characteristics of the midlatitude cirrus that occurred during that time. Modeling studies that range from the microphysical to the mesoscale are now underway attempting to piece the detailed knowledge of this cloud system into a coherent picture of the atmospheric processes important to cirrus cloud development and maintenance. An important component of the modeling work, either as an input parameter in the case of cloud-scale models, or as output in the case of meso- and larger-scale models, is the large-scale forcing of the cloud system. By forcing we mean the synoptic-scale vertical motions and moisture budget that initially send air parcels ascending and supply the water vapor to allow condensation during ascent. Defining this forcing from the synoptic scale to the cloud scale is one of the stated scientific objectives of the FIRE program. From the standpoint of model validation, it is also necessary that the vertical motions and large-scale moisture budget of the case studies be derived from observations. It is considered important that the models used to simulate the observed cloud fields begin with the correct dynamics and that the dynamics be in the right place for the right reasons.

  15. An extended moderate-depth contiguous layer of the Chandra Bootes field - additional pointings

    NASA Astrophysics Data System (ADS)

    Kraft, Ralph

    2016-09-01

    We propose 150 ks (6 x 25 ks) ACIS-I observations to supplement existing X-ray data in XBootes. These new observations will allow the expansion of a relatively large contiguous (~2 deg²) region in Bootes covered at 40 ks, i.e., 5-8x deeper than the nominal Bootes field. In concert with the recently approved 1.025 Ms Chandra Deep Wide-Field Survey, this additional deep layer of Bootes will (1) provide new insights into the dark matter halos and large-scale structures that host AGN; and (2) allow new measurements of the distribution of X-ray luminosities and connections to host galaxy evolution.

  16. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.
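
    The link between averaging time and the large-eddy time scale can be illustrated with a toy AR(1) "turbulence" series: the spread of window-averaged velocities shrinks as each window spans more eddy time scales. All numbers below are illustrative, not the paper's synthetic-profile generator.

    ```python
    # Standard error of time-averaged velocity vs. averaging window length.
    import numpy as np

    rng = np.random.default_rng(2)

    dt = 1.0                  # sample interval, s
    T_eddy = 20.0             # integral time scale of the large eddies, s
    phi = np.exp(-dt / T_eddy)
    n = 20000

    # AR(1) series: 1 m/s mean flow plus correlated fluctuations (std ~0.2).
    u = np.empty(n)
    u[0] = 1.0
    for i in range(1, n):
        u[i] = 1.0 + phi * (u[i - 1] - 1.0) + rng.normal(0, 0.2) * np.sqrt(1 - phi**2)

    for T_avg in (10, 40, 160, 640):                  # averaging times, s
        m = int(T_avg / dt)
        means = u[: n - n % m].reshape(-1, m).mean(axis=1)
        print(f"T_avg={T_avg:4d}s (~{T_avg / T_eddy:4.1f} eddies): "
              f"std of window means = {means.std():.4f} m/s")
    ```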

  17. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. Contact: hyrkas@cs.washington.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
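
    A minimal sketch of the mixture-model step, using scikit-learn's GaussianMixture in place of the paper's Hadoop implementation; the two-channel synthetic events and the number of populations are assumptions for illustration.

    ```python
    # Gaussian mixture classification of synthetic cytometry events.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # Three phytoplankton-like populations in two channels
    # (e.g. scatter vs. fluorescence); entirely synthetic.
    events = np.vstack([
        rng.normal([1.0, 1.0], 0.15, size=(3000, 2)),
        rng.normal([2.0, 1.5], 0.20, size=(2000, 2)),
        rng.normal([3.0, 3.0], 0.30, size=(1000, 2)),
    ])

    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(events)
    print("events per fitted component:", np.bincount(labels))
    ```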

  18. Deformation of leaky-dielectric fluid globules under strong electric fields: Boundary layers and jets at large Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Schnitzer, Ory; Frankel, Itzchak; Yariv, Ehud

    2013-11-01

    In Taylor's theory of electrohydrodynamic drop deformation (Proc. R. Soc. Lond. A, vol. 291, 1966, pp. 159-166), inertia is neglected at the outset, resulting in a fluid velocity that scales as the square of the applied-field magnitude. For large drops, with increasing field strength the Reynolds number predicted by this scaling may actually become large, suggesting the need for a complementary large-Reynolds-number investigation. Balancing viscous stresses and electrical shear forces in this limit reveals a different velocity scaling, with the 4/3-power of the applied-field magnitude. We focus here on the flow over a gas bubble. It is essentially confined to two boundary layers propagating from the poles to the equator, where they collide to form a radial jet. At leading order in the Capillary number, the bubble deforms due to (i) Maxwell stresses; (ii) the hydrodynamic boundary-layer pressure associated with centripetal acceleration; and (iii) the intense pressure distribution acting over the narrow equatorial deflection zone, appearing as a concentrated load. Remarkably, the unique flow topology and associated scalings allow us to obtain a closed-form expression for this deformation through application of integral mass and momentum balances. On the bubble scale, the concentrated pressure load is manifested in the appearance of a non-smooth equatorial dimple.

  19. Ribbons characterize magnetohydrodynamic magnetic fields better than lines: a lesson from dynamo theory

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.; Hubbard, Alexander

    2014-08-01

    Blackman and Brandenburg argued that magnetic helicity conservation in dynamo theory can in principle be captured by diagrams of mean field dynamos when the magnetic fields are represented by ribbons or tubes, but not by lines. Here, we present such a schematic ribbon diagram for the α2 dynamo that tracks magnetic helicity and provides distinct scales of large-scale magnetic helicity, small-scale magnetic helicity, and kinetic helicity involved in the process. This also motivates our construction of a new `2.5 scale' minimalist generalization of the helicity-evolving equations for the α2 dynamo that separately allows for these three distinct length-scales while keeping only two dynamical equations. We solve these equations and, as in previous studies, find that the large-scale field first grows at a rate independent of the magnetic Reynolds number RM before quenching to an RM-dependent regime. But we also show that the larger the ratio of the wavenumber where the small-scale current helicity resides to that of the forcing scale, the earlier the non-linear dynamo quenching occurs, and the weaker the large-scale field is at the turnoff from linear growth. The harmony between the theory and the schematic diagram exemplifies a general lesson that magnetic fields in magnetohydrodynamics are better visualized as two-dimensional ribbons (or pairs of lines) rather than single lines.

  20. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated to the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.
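
    As a simplified, non-hierarchical sketch of the threshold idea: a logistic occurrence model with a hinge at an unknown change point in proportion urban land use, fit by profiling the change point over a grid. The paper's model is hierarchical and Bayesian; this shows only the core likelihood, with all numbers invented.

    ```python
    # Logistic occurrence model with a change point, fit by profile likelihood.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    n = 400
    urban = rng.uniform(0, 1, n)                 # proportion urban land use
    true_cp = 0.35
    # Occurrence probability drops sharply past the change point.
    p_true = 1 / (1 + np.exp(-(1.5 - 8.0 * np.clip(urban - true_cp, 0, None))))
    y = rng.binomial(1, p_true)                  # detection/nondetection

    def nll(beta, cp):
        """Negative log-likelihood of the hinge-logistic model."""
        eta = beta[0] + beta[1] * np.clip(urban - cp, 0, None)
        p = np.clip(1 / (1 + np.exp(-eta)), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    fits = [(cp, minimize(nll, x0=[0.0, 0.0], args=(cp,)))
            for cp in np.linspace(0.05, 0.95, 19)]
    cp_hat, fit = min(fits, key=lambda t: t[1].fun)
    print(f"estimated change point: {cp_hat:.2f} (true {true_cp})")
    ```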

  1. Strain localization in models and nature: bridging the gaps.

    NASA Astrophysics Data System (ADS)

    Burov, E.; Francois, T.; Leguille, J.

    2012-04-01

    Mechanisms of strain localization and their role in tectonic evolution are still largely debated. Indeed, the laboratory data on strain localization processes are not abundant, they do not cover the entire range of possible mechanisms, and they have to be extrapolated, sometimes with great uncertainty, to geological scales, while observations of localization processes at outcrop scale are scarce, not always representative, and usually difficult to quantify. Numerical thermo-mechanical models allow us to investigate the relative importance of some of the localization processes, whether they are hypothesized or observed at laboratory or outcrop scale. The numerical models can test different observationally or analytically derived laws in terms of their applicability to natural scales and tectonic processes. The models are limited, however, in their capacity to reproduce physical mechanisms, and necessarily simplify the softening laws, leading to "numerical" localization. Numerical strain localization is also limited by grid resolution and the ability of specific numerical codes to handle large strains and the complexity of the associated physical phenomena. Hence, multiple iterations between observations and models are needed to elucidate the causes of strain localization in nature. We here investigate the relative impact of different weakening laws on localization of deformation using large-strain thermo-mechanical models. We test, using several "generic" rifting and collision settings, the implications of structural softening, tectonic heritage, shear heating, friction angle and cohesion softening, ductile softening (mimicking grain-size reduction), as well as a number of other mechanisms such as fluid-assisted phase changes. The results suggest that different mechanisms of strain localization may interfere in nature, yet in most cases it is not easy to establish quantifiable links between the laboratory data and the best-fitting parameters of the effective softening laws that allow large-scale tectonic evolution to be reproduced. For example, one of the most effective and widely used mechanisms of "numerical" strain localization is friction angle softening. Yet this very law appears to be the most difficult to justify on physical and observational grounds.

  2. Large-scale photospheric motions determined from granule tracking and helioseismology from SDO/HMI data

    NASA Astrophysics Data System (ADS)

    Roudier, Th.; Švanda, M.; Ballot, J.; Malherbe, J. M.; Rieutord, M.

    2018-04-01

    Context. Large-scale flows in the Sun play an important role in the dynamo process linked to the solar cycle. The important large-scale flows are the differential rotation and the meridional circulation, with amplitudes of the order of km s⁻¹ and of a few m s⁻¹, respectively. These flows also have cycle-related components, namely the torsional oscillations. Aims: We aim to determine large-scale plasma flows on the solar surface by deriving horizontal flow velocities using the techniques of solar granule tracking, dopplergrams, and time-distance helioseismology. Methods: Coherent structure tracking (CST) and time-distance helioseismology were used to investigate the solar differential rotation and meridional circulation at the solar surface on a 30-day HMI/SDO sequence. The influence of a large sunspot on these large-scale flows has also been studied with a specific 7-day HMI/SDO sequence. Results: The large-scale flows measured by the CST on the solar surface and the same flows determined from the same data with helioseismology in the first 1 Mm below the surface are in good agreement in amplitude and direction. The torsional waves are also located at the same latitudes with amplitudes of the same order. We are able to measure the meridional circulation correctly using the CST method with only 3 days of data and after averaging between ± 15° in longitude. Conclusions: We conclude that the combination of CST and Doppler velocities allows us to properly detect the differential solar rotation and also smaller-amplitude flows such as the meridional circulation and torsional waves. The results of our methods are in good agreement with helioseismic measurements.

  3. A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project.

    PubMed

    Ewers, Robert M; Didham, Raphael K; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L; Turner, Edgar C

    2011-11-27

    Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project represents an advance on existing experiments in that it: (i) allows discrimination of the effects of landscape-level forest cover from patch-level processes; (ii) is designed to facilitate the unification of a wide range of data types on ecological patterns and processes that operate over a wide range of spatial scales; (iii) has greater replication than existing experiments; (iv) incorporates an experimental manipulation of riparian corridors; and (v) embeds the experimentally fragmented landscape within a wider gradient of land-use intensity than do existing projects. The SAFE Project represents an opportunity for ecologists across disciplines to participate in a large initiative designed to generate a broad understanding of the ecological impacts of tropical forest modification.

  4. A large-scale forest fragmentation experiment: the Stability of Altered Forest Ecosystems Project

    PubMed Central

    Ewers, Robert M.; Didham, Raphael K.; Fahrig, Lenore; Ferraz, Gonçalo; Hector, Andy; Holt, Robert D.; Kapos, Valerie; Reynolds, Glen; Sinun, Waidi; Snaddon, Jake L.; Turner, Edgar C.

    2011-01-01

    Opportunities to conduct large-scale field experiments are rare, but provide a unique opportunity to reveal the complex processes that operate within natural ecosystems. Here, we review the design of existing, large-scale forest fragmentation experiments. Based on this review, we develop a design for the Stability of Altered Forest Ecosystems (SAFE) Project, a new forest fragmentation experiment to be located in the lowland tropical forests of Borneo (Sabah, Malaysia). The SAFE Project represents an advance on existing experiments in that it: (i) allows discrimination of the effects of landscape-level forest cover from patch-level processes; (ii) is designed to facilitate the unification of a wide range of data types on ecological patterns and processes that operate over a wide range of spatial scales; (iii) has greater replication than existing experiments; (iv) incorporates an experimental manipulation of riparian corridors; and (v) embeds the experimentally fragmented landscape within a wider gradient of land-use intensity than do existing projects. The SAFE Project represents an opportunity for ecologists across disciplines to participate in a large initiative designed to generate a broad understanding of the ecological impacts of tropical forest modification. PMID:22006969

  5. Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale

    NASA Astrophysics Data System (ADS)

    Gorbunov, Dmitry S.; Sibiryakov, Sergei M.

    2005-09-01

    We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This allows one to push the Lorentz breaking scale to as high as a few tenths of the Planck mass. The model does not contain ghosts or tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses becomes gradually weaker with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.

  6. Combined process automation for large-scale EEG analysis.

    PubMed

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
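
    A hedged sketch of one pass through such a linked pipeline with SciPy, covering band filtering, threshold spike detection, and power spectral density; the synthetic signal, band edges, and thresholds are illustrative, not the authors' settings.

    ```python
    # Linked EEG processing steps on a synthetic 60 s interval.
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(9)

    fs = 500.0                                  # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)                # one post-stimulation interval
    eeg = rng.normal(0, 10, t.size)             # background activity, microvolts
    eeg[(t % 2.0) < 0.004] += 120.0             # inject a "spike" every 2 s

    # User-defined band: 1-70 Hz band-pass (4th-order Butterworth).
    sos = signal.butter(4, [1, 70], btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, eeg)

    # Crude spike detection: threshold at 5 robust standard deviations.
    mad = np.median(np.abs(filtered - np.median(filtered)))
    peaks, _ = signal.find_peaks(filtered, height=5 * 1.4826 * mad,
                                 distance=int(0.1 * fs))
    print(f"{peaks.size} spikes detected")

    # Power spectral density via Welch's method.
    f, pxx = signal.welch(filtered, fs=fs, nperseg=2048)
    print(f"peak PSD at {f[np.argmax(pxx)]:.1f} Hz")
    ```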

  7. Upscaling high-quality CVD graphene devices to 100 micron-scale and beyond

    NASA Astrophysics Data System (ADS)

    Lyon, Timothy J.; Sichau, Jonas; Dorn, August; Zurutuza, Amaia; Pesquera, Amaia; Centeno, Alba; Blick, Robert H.

    2017-03-01

    We describe a method for transferring ultra-large-scale chemical vapor deposition-grown graphene sheets. These samples can be fabricated as large as several cm² and are characterized by magneto-transport measurements on SiO2 substrates. The process we have developed is highly effective and limits damage to the graphene all the way through metal liftoff, as shown in carrier mobility measurements and the observation of the quantum Hall effect. The charge-neutral point is shown to move drastically to near-zero gate voltage after a 2-step post-fabrication annealing process, which also allows for greatly diminished hysteresis.

  8. Production regimes in four eastern boundary current systems

    NASA Technical Reports Server (NTRS)

    Carr, M. E.; Kearns, E. J.

    2003-01-01

    High productivity (maxima 3 g C m⁻² day⁻¹) of the Eastern Boundary Currents (EBCs), i.e. the California, Peru-Humboldt, Canary and Benguela Currents, is driven by a combination of local forcing and large-scale circulation. The characteristics of the deep water brought to the surface by upwelling favorable winds depend on the large-scale circulation patterns. Here we use a new hydrographic and nutrient climatology together with satellite measurements of the wind vector, sea-surface temperature (SST), chlorophyll concentration, and primary production modeled from ocean color to quantify the meridional and seasonal patterns of upwelling dynamics and biological response. The unprecedented combination of data sets allows us to describe objectively the variability for small regions within each current and to characterize the governing factors for biological production. The temporal and spatial environmental variability was due in most regions to large-scale circulation, alone or in combination with offshore transport (local forcing). The observed meridional and seasonal patterns of biomass and primary production were most highly correlated to components representing large-scale circulation. The biomass sustained by a given nutrient concentration in the Atlantic EBCs was twice as large as that of the Pacific EBCs. This apparent greater efficiency may be due to availability of iron, physical retention, or differences in planktonic community structure.

  9. How institutions shaped the last major evolutionary transition to large-scale human societies

    PubMed Central

    Powers, Simon T.; van Schaik, Carel P.; Lehmann, Laurent

    2016-01-01

    What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default ‘Hobbesian’ rules of the ‘game of life’, determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter–gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization. PMID:26729937

  10. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Characterization currently requires large-scale testing to obtain steady detonation waves for analysis, owing to the relatively thick reaction zones. Use of a microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.

  11. Effects of high sound speed confiners on ANFO detonations

    NASA Astrophysics Data System (ADS)

    Kiyanda, Charles; Jackson, Scott; Short, Mark

    2011-06-01

    The interaction between high explosive (HE) detonations and high sound speed confiners, where the confiner sound speed exceeds the HE's detonation speed, has not been thoroughly studied. The subsonic nature of the flow in the confiner allows stress waves to travel ahead of the main detonation front and influence the upstream HE state. The interaction between the detonation wave and the confiner is also no longer a local interaction, so that the confiner thickness now plays a significant role in the detonation dynamics. We report here on larger scale experiments in which a mixture of ammonium nitrate and fuel oil (ANFO) is detonated in aluminium confiners with varying charge diameter and confiner thickness. The results of these large-scale experiments are compared with previous large-scale ANFO experiments in cardboard, as well as smaller-scale aluminium confined ANFO experiments, to characterize the effects of confiner thickness.

  12. Anisotropy of the Cosmic Microwave Background Radiation on Large and Medium Angular Scales

    NASA Technical Reports Server (NTRS)

    Houghton, Anthony; Timbie, Peter

    1998-01-01

    This grant has supported work at Brown University on measurements of the 2.7 K Cosmic Microwave Background Radiation (CMB). The goal has been to characterize the spatial variations in the temperature of the CMB in order to understand the formation of large-scale structure in the universe. We have concurrently pursued two measurements using millimeter-wave telescopes carried aloft by scientific balloons. Both systems operate over a range of wavelengths, chosen to allow spectral removal of foreground sources such as the atmosphere, Galaxy, etc. The angular resolution of approx. 25 arcminutes is near the angular scale at which the most structure is predicted by current models to be visible in the CMB angular power spectrum. The main goal is to determine the angular scale of this structure; in turn we can infer the density parameter, Omega, for the universe as well as other cosmological parameters, such as the Hubble constant.

  13. Development of a large-scale isolation chamber system for the safe and humane care of medium-sized laboratory animals harboring infectious diseases*

    PubMed Central

    Pan, Xin; Qi, Jian-cheng; Long, Ming; Liang, Hao; Chen, Xiao; Li, Han; Li, Guang-bo; Zheng, Hao

    2010-01-01

    The close phylogenetic relationship between humans and non-human primates makes non-human primates an irreplaceable model for the study of human infectious diseases. In this study, we describe the development of a large-scale automatic multi-functional isolation chamber for use with medium-sized laboratory animals carrying infectious diseases. The isolation chamber, including the transfer chain, disinfection chain, negative air pressure isolation system, animal welfare system, and the automated system, is designed to meet all biological safety standards. To create an internal chamber environment that is completely isolated from the exterior, variable frequency drive blowers are used in the air-intake and air-exhaust system, precisely controlling the filtered air flow and providing an air-barrier protection. A double door transfer port is used to transfer material between the interior of the isolation chamber and the outside. A peracetic acid sterilizer and its associated pipeline allow for complete disinfection of the isolation chamber. All of the isolation chamber parameters can be automatically controlled by a programmable computerized menu, allowing for work with different animals in different-sized cages depending on the research project. The large-scale multi-functional isolation chamber provides a useful and safe system for working with infectious medium-sized laboratory animals in high-level bio-safety laboratories. PMID:20872984

  14. Assessing the Challenges in the Application of Potential Probiotic Lactic Acid Bacteria in the Large-Scale Fermentation of Spanish-Style Table Olives

    PubMed Central

    Rodríguez-Gómez, Francisco; Romero-Gil, Verónica; Arroyo-López, Francisco N.; Roldán-Reyes, Juan C.; Torres-Gallardo, Rosa; Bautista-Gallego, Joaquín; García-García, Pedro; Garrido-Fernández, Antonio

    2017-01-01

    This work studies the inoculation conditions for allowing the survival/predominance of a potential probiotic strain (Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent wild initial microbiota growth) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70–90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. Results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase. PMID:28567038

  15. Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro

    2017-05-01

    The recent availability of large-scale, time-resolved and high-quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time-varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, especially focusing on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in social networks' evolution: the individual's propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local-scale behavior but analogous asymptotics, can lead to the same long-time and large-scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of tie allocation mechanisms and waiting-time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
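
    A minimal activity-driven simulation capturing two of the mechanisms above, heterogeneous activity and a memory-based tie-allocation strategy; the power-law activities and the reinforcement rule are illustrative assumptions, not the paper's fitted functional forms.

    ```python
    # Activity-driven temporal network with a simple memory/reinforcement rule.
    import numpy as np

    rng = np.random.default_rng(5)

    n_nodes, n_steps, m_ties = 500, 200, 2
    # Heterogeneous activities a_i from a heavy-tailed distribution, capped at 1.
    activity = np.clip(rng.pareto(2.5, n_nodes) * 0.05, 0, 1)

    contacts = [dict() for _ in range(n_nodes)]   # alter -> number of past ties

    for _ in range(n_steps):
        active = np.where(rng.random(n_nodes) < activity)[0]
        for i in active:
            for _ in range(m_ties):
                k = len(contacts[i])
                # With prob 1/(1+k) explore a new alter, else return to a
                # known one with probability proportional to past contacts.
                if k == 0 or rng.random() < 1.0 / (1.0 + k):
                    j = int(rng.integers(n_nodes))
                    if j == i:
                        continue
                else:
                    alters = list(contacts[i])
                    w = np.array([contacts[i][a] for a in alters], float)
                    j = int(rng.choice(alters, p=w / w.sum()))
                contacts[i][j] = contacts[i].get(j, 0) + 1

    degrees = np.array([len(c) for c in contacts])
    print("mean/max integrated degree:", degrees.mean(), degrees.max())
    ```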

  16. Topological defects in extended inflation

    NASA Technical Reports Server (NTRS)

    Copeland, Edmund J.; Kolb, Edward W.; Liddle, Andrew R.

    1990-01-01

    The production of topological defects, especially cosmic strings, in extended inflation models was considered. In extended inflation, the Universe passes through a first-order phase transition via bubble percolation, which naturally allows defects to form at the end of inflation. The correlation length, which determines the number density of the defects, is related to the mean size of bubbles when they collide. This mechanism allows a natural combination of inflation and large scale structure via cosmic strings.

  17. Mesoscale Dynamical Regimes in the Midlatitudes

    NASA Astrophysics Data System (ADS)

    Craig, G. C.; Selz, T.

    2018-01-01

    The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7 day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s⁻¹. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
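
    The "advective band" can be illustrated with a toy wavenumber-frequency spectrum: a pattern advected at a constant speed concentrates its spectral energy along the line ω = Uk, and an energy-weighted estimate recovers the velocity scale. Everything below is synthetic, not the study's simulation output.

    ```python
    # Space-time spectrum of a synthetic field advected at U = 10 m/s.
    import numpy as np

    rng = np.random.default_rng(10)

    nx, nt = 256, 256
    dx, dt = 10e3, 600.0                      # 10 km grid, 10 min steps
    U = 10.0                                  # advection speed, m/s

    # Smooth random initial pattern (damp high wavenumbers).
    f0 = np.real(np.fft.ifft(np.fft.fft(rng.normal(size=nx))
                             * np.exp(-np.arange(nx) / 20)))

    # Advect by Fourier phase shift: f(x, t) = f0(x - U t), periodic domain.
    data = np.empty((nt, nx))
    for it in range(nt):
        shift = U * it * dt / dx              # shift in grid units
        data[it] = np.real(np.fft.ifft(
            np.fft.fft(f0) * np.exp(-2j * np.pi * np.fft.fftfreq(nx) * shift)))

    spec = np.abs(np.fft.fft2(data)) ** 2
    k = np.fft.fftfreq(nx, d=dx)              # cycles per meter
    om = np.fft.fftfreq(nt, d=dt)             # cycles per second
    K, OM = np.meshgrid(k, om)

    mask = K > 0
    c = np.sum(spec[mask] * OM[mask] / K[mask]) / np.sum(spec[mask])
    print(f"energy-weighted phase speed ~ {abs(c):.1f} m/s")
    ```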

  18. An Expected Value Air Combat Model Simulation Algorithm to Predict Missions Performance in Tactical Air Operations.

    DTIC Science & Technology

    1983-09-01

    ABSTRACT: This thesis intends to create the basic...a need for a small-scale model which allows a student analyst of tactical air operations to create his own battles and to test his own strategies with...iconic model is a large- or small-scale representation of states, objects, or events. For example, a scale model airplane resembles the system under the

  19. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.

  20. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
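
    A toy rendering of the cooperative idea: several searchers with different exploration settings run in parallel (simulated round-robin here) and periodically share the best parameter vector found so far. The real method runs enhanced Scatter Search threads; this stripped-down stochastic hill climber is only illustrative.

    ```python
    # Cooperative multi-searcher parameter estimation on a toy cost function.
    import numpy as np

    rng = np.random.default_rng(6)

    def cost(theta):
        # Stand-in for a model-vs-data fit criterion (Rosenbrock-like).
        return np.sum(100 * (theta[1:] - theta[:-1] ** 2) ** 2
                      + (1 - theta[:-1]) ** 2)

    dim, n_threads, n_epochs, steps = 10, 4, 30, 200
    best = [rng.uniform(-2, 2, dim) for _ in range(n_threads)]

    for epoch in range(n_epochs):
        for kth in range(n_threads):
            theta = best[kth].copy()
            # Each "thread" explores at its own scale (diversity of settings).
            scale = 0.5 * (kth + 1) / (n_threads * (1 + epoch))
            for _ in range(steps):
                cand = theta + rng.normal(0, scale, dim)
                if cost(cand) < cost(theta):
                    theta = cand
            best[kth] = theta
        # Cooperation: broadcast the overall best to every thread.
        champion = min(best, key=cost)
        best = [champion.copy() for _ in best]

    print("best cost found:", cost(min(best, key=cost)))
    ```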

  1. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block-scale through their flow alignment. Thus, the block-scale transverse dispersivities remain in the similar magnitude as local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
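
    A small sketch of the scale-transfer quantities described above: block-wise temporal moments of particle arrival times, the statistics to which a sub-block mass-transfer model would be matched. The bimodal synthetic arrivals mimic a fast conduit plus a slow matrix tail; the matching step itself is not shown.

    ```python
    # Temporal moments of particle arrival times for one flow-aligned block.
    import numpy as np

    rng = np.random.default_rng(7)

    # Fast advective peak plus heavy late-time tail (matrix diffusion).
    t_fast = rng.lognormal(mean=1.0, sigma=0.3, size=800)
    t_tail = rng.lognormal(mean=3.0, sigma=0.8, size=200)
    arrivals = np.concatenate([t_fast, t_tail])

    m0 = arrivals.size            # zeroth moment: recovered particle mass
    m1 = arrivals.mean()          # first moment: mean arrival time
    m2c = arrivals.var()          # central second moment: spread/dispersion
    skew = np.mean(((arrivals - m1) / arrivals.std()) ** 3)  # tailing

    print(f"mass {m0}, mean arrival {m1:.2f}, variance {m2c:.2f}, skew {skew:.2f}")
    ```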

  2. Hydrological response of karst systems to large-scale climate variability for different catchments of the French karst observatory network INSU/CNRS SNO KARST

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi

    2017-04-01

    The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of the new French research infrastructure for the observation of the critical zone, OZCAR. SNO KARST is composed of several karst sites distributed over conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountain, western and northwestern shores near the Atlantic or the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving understanding of the hydrological functioning of karst catchments. Here we used several sites of SNO KARST in order to assess the hydrological response of karst catchments to long-term variation of large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and karst water resources variability. As karst hydrosystems are highly heterogeneous media, they behave differently across different time-scales: we explore the large-scale/local-scale relationships according to time-scales using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea level pressure (SLP). The different wavelet components of karst discharge in response to the corresponding wavelet components of climate fields are either 1) compared to physico-chemical/geochemical responses at karst springs, or 2) interpreted in terms of hydrological functioning by comparing discharge wavelet components to internal components obtained from precipitation/discharge models using the KARSTMOD conceptual modeling platform of SNO KARST.
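
    A hedged sketch of the multiresolution comparison, assuming PyWavelets is available: decompose a discharge series and a climate series into additive dyadic-scale components and correlate them scale by scale. Both series are synthetic stand-ins for the observatory data.

    ```python
    # Scale-by-scale correlation of two series via wavelet multiresolution.
    import numpy as np
    import pywt

    rng = np.random.default_rng(8)

    n = 1024                                     # e.g. daily values
    t = np.arange(n)
    climate = np.sin(2 * np.pi * t / 365.0) + 0.5 * rng.normal(size=n)
    discharge = np.roll(climate, 20) + 0.8 * rng.normal(size=n)  # lagged response

    def components(x, wavelet="db4", level=6):
        """Additive components: reconstruct each coefficient band alone."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        out = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            out.append(pywt.waverec(kept, wavelet)[: len(x)])
        return out  # [approximation, coarsest detail, ..., finest detail]

    for i, (dc, cc) in enumerate(zip(components(discharge), components(climate))):
        r = np.corrcoef(dc, cc)[0, 1]
        print(f"component {i}: discharge-climate correlation = {r:+.2f}")
    ```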

  3. Resolving the Kinetic Reconnection Length Scale in Global Magnetospheric Simulations with MHD-EPIC

    NASA Astrophysics Data System (ADS)

    Toth, G.; Chen, Y.; Cassak, P.; Jordanova, V.; Peng, B.; Markidis, S.; Gombosi, T. I.

    2016-12-01

    We have recently developed a new modeling capability: the Magnetohydrodynamics with Embedded Particle-in-Cell (MHD-EPIC) algorithm, with support from Los Alamos SHIELDS and NSF INSPIRE grants. We have implemented MHD-EPIC into the Space Weather Modeling Framework (SWMF) using the implicit Particle-in-Cell (iPIC3D) and the BATS-R-US extended magnetohydrodynamic codes. The MHD-EPIC model allows two-way coupled simulations in two and three dimensions with multiple embedded PIC regions. Both BATS-R-US and iPIC3D are massively parallel codes. The MHD-EPIC approach allows global magnetosphere simulations with embedded kinetic simulations. For small magnetospheres, like Ganymede or Mercury, we can easily resolve the ion scales around the reconnection sites. Modeling the Earth magnetosphere is very challenging even with our efficient MHD-EPIC model due to the large separation between the global and ion scales. On the other hand, the large separation of scales may be exploited: the solution may not be sensitive to the ion inertial length as long as it is small relative to the global scales. The ion inertial length can be varied by changing the ion mass while keeping the MHD mass density, the velocity, and pressure the same for the initial and boundary conditions. Our two-dimensional MHD-EPIC simulations for the dayside reconnection region show, in fact, that the overall solution is not sensitive to the ion inertial length. The shape, size and frequency of flux transfer events are very similar for a wide range of ion masses. Our results mean that 3D MHD-EPIC simulations for the Earth and other large magnetospheres can be made computationally affordable by artificially increasing the ion mass: the required grid resolution and time step in the PIC model are proportional to the ion inertial length. Changing the ion mass by a factor of 4, for example, speeds up the PIC code by a factor of 256. In fact, this approach allowed us to perform hour-long 3D MHD-EPIC simulations for the Earth magnetosphere.
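
    The quoted speed-up follows from a simple cost-scaling argument. As a hedged back-of-the-envelope (our illustration, not text from the abstract):

    ```latex
    % Assume the PIC grid spacing and time step both scale with the ion
    % inertial length d_i, for a fixed physical domain size L and duration T:
    \[
      \mathrm{cost} \;\propto\; \left(\frac{L}{\Delta x}\right)^{3}
      \frac{T}{\Delta t} \;\propto\; d_i^{-4},
    \]
    % so relaxing the required resolution by a factor of 4 reduces the
    % three-dimensional PIC cost by $4^{4} = 256$, the factor quoted above.
    ```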

  4. Achieving a Successful Scale-Down Model and Optimized Economics through Parvovirus Filter Validation using Purified TrueSpike™ Viruses.

    PubMed

    De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul

    2015-01-01

    This article describes a four virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike™ viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m² filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. All virus log reduction values were between ≥3.66 and ≥5.60. The use of TrueSpike™ at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance from the PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at the downstream stages in the process is expected to be and is typically low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small version of the larger manufacturing size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process. However, the virus preparation used to spike the feed material often contains impurities that contribute adversely to virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and put into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is the unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage. © PDA, Inc. 2015.
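
    For reference, the log reduction value reported above is conventionally computed from the total virus loads before and after filtration (a standard definition, not specific to this study):

    ```latex
    \[
      \mathrm{LRV} \;=\; \log_{10}\!\left(
        \frac{\text{total virus in spiked feed}}{\text{total virus in filtrate}}
      \right)
    \]
    % e.g. a feed load of $10^{9}$ infectious units and a filtrate load of
    % $10^{3.4}$ units gives $\mathrm{LRV} = 9 - 3.4 = 5.6$.
    ```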

  5. Optical Communications With A Geiger Mode APD Array

    DTIC Science & Technology

    2016-02-09

    spurious fires from numerous sources, including crosstalk from other detectors in the same array. Additionally, after a successful detection, the ... be combined into arrays with large numbers of detectors, allowing for scaling of dynamic range with relatively little overhead on space and power ... overall higher rate of dark counts than a single detector, this is more than compensated for by the extra detectors. A sufficiently large APD array could

  6. A numerical study of the laminar necklace vortex system and its effect on the wake for a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan; Constantinescu, George

    2014-11-01

    Large Eddy Simulation is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel, for Reynolds numbers defined with the cylinder diameter, D, smaller than 4,460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. For the relatively shallow flow conditions considered in this study (H/D ≈ 1, where H is the channel depth), at times, the disturbances induced by the legs of the necklace vortices do not allow the separated shear layers (SSLs) on the two sides of the cylinder to interact in a way that allows the vorticity redistribution mechanism to lead to the formation of a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively large periods of time. Simulation results show that the wake structure changes randomly between time intervals when large-scale rollers are forming and are convected in the wake (von Karman regime), and time intervals when the rollers do not form.

  7. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  8. Scale up of large ALON® and spinel windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Kashalikar, Uday; Ramisetty, Mohan; Jha, Santosh; Sastri, Suri

    2017-05-01

    Aluminum Oxynitride (ALON® Transparent Ceramic) and Magnesia Aluminate Spinel (Spinel) combine broadband transparency with excellent mechanical properties. Their cubic structure means that they are transparent in their polycrystalline form, allowing them to be manufactured by conventional powder processing techniques. Surmet has scaled up its ALON® production capability to produce and deliver windows as large as 4.4 sq ft. We have also produced our first 6 sq ft window. We are in the process of producing 7 sq ft ALON® window blanks for armor applications, and scale-up to even larger, high optical quality blanks for Recce window applications is underway. Surmet also produces spinel for customers that require superior transmission at the longer wavelengths in the mid-wave infrared (MWIR). Spinel windows have been limited to smaller sizes than have been achieved with ALON®. To date, the largest spinel window produced is 11x18 in, and 14x20 in windows are currently in process. Surmet is now scaling up its spinel processing capability to produce high quality window blanks as large as 19x27 in for sensor applications.

  9. Reconciling tensor and scalar observables in G-inflation

    NASA Astrophysics Data System (ADS)

    Ramírez, Héctor; Passaglia, Samuel; Motohashi, Hayato; Hu, Wayne; Mena, Olga

    2018-04-01

    The simple m²φ² potential as an inflationary model is coming under increasing tension with limits on the tensor-to-scalar ratio r and measurements of the scalar spectral index n_s. Cubic Galileon interactions in the context of the Horndeski action can potentially reconcile the observables. However, we show that this cannot be achieved with only a constant Galileon mass scale because the interactions turn off too slowly, leading also to gradient instabilities after inflation ends. Allowing for a more rapid transition can reconcile the observables but moderately breaks the slow-roll approximation, leading to a relatively large and negative running of the tilt α_s that can be of order n_s − 1. We show that the observables on CMB and large scale structure scales can be predicted accurately using the optimized slow-roll approach instead of the traditional slow-roll expansion. Upper limits on |α_s| place a lower bound of r ≳ 0.005 and, conversely, a given r places a lower bound on |α_s|, both of which are potentially observable with next-generation CMB and large scale structure surveys.

  10. Applying Hillslope Hydrology to Bridge between Ecosystem and Grid-Scale Processes within an Earth System Model

    NASA Astrophysics Data System (ADS)

    Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.

    2013-12-01

    Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. We examine sensitivities to alternative parameterizations of hillslope geometry, macroporosity, and surface runoff / inundation, and to the choice of global topographic dataset and groundwater hydraulic conductivity distribution. Simulated groundwater dynamics among hillslopes tend to cluster into three regimes of wet and well-drained, wet but poorly-drained, and dry. In the base model configuration, near-surface gridcell-mean water tables exist in an excessively large area compared to observations, including large areas of the Eastern U.S. and Northern Europe. However, in better-drained areas, the decrease in water table depth along the hillslope gradient allows for realistic increases in ecosystem water availability and soil carbon downslope. The inclusion of subgrid hydrology can increase the equilibrium 0-2 m global soil carbon stock by a large factor, due to the nonlinear effect of anoxia. We conclude that this innovative modeling framework allows for the inclusion of hillslope-scale processes and the potential for wetland dynamics in an ESM without need for a high-resolution 3-dimensional groundwater model. Future work will include investigating the potential for future changes in land carbon fluxes caused by the effects of changing hydrological regime, particularly in peatland-rich areas poorly treated by current ESMs.

  11. Infrared Multiphoton Dissociation for Quantitative Shotgun Proteomics

    PubMed Central

    Ledvina, Aaron R.; Lee, M. Violet; McAlister, Graeme C.; Westphall, Michael S.; Coon, Joshua J.

    2012-01-01

    We modified a dual-cell quadrupole linear ion trap (QLT) mass spectrometer to perform infrared multiphoton dissociation (IRMPD) in the low-pressure trap and performed large-scale IRMPD analyses of complex peptide mixtures. Upon optimization of activation parameters (precursor q-value, irradiation time, and photon flux), IRMPD subtly but significantly outperforms resonant excitation CAD for peptides identified at a 1% false-discovery rate (FDR) from a yeast tryptic digest (95% confidence, p = 0.019). We further demonstrate that IRMPD is compatible with the analysis of isobaric-tagged peptides. Using a fixed QLT RF amplitude allows for the consistent retention of reporter ions but necessitates the use of variable IRMPD irradiation times, dependent upon precursor mass-to-charge ratio (m/z). We show that IRMPD activation parameters can be tuned to allow for effective peptide identification and quantitation simultaneously. We thus conclude that IRMPD performed in a dual-cell ion trap is an effective option for the large-scale analysis of both unmodified and isobaric-tagged peptides. PMID:22480380

  12. Simulation of FRET dyes allows quantitative comparison against experimental data

    NASA Astrophysics Data System (ADS)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single-molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only a few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on the slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other, leading to new insights into biomolecular dynamics and function.
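
    Where the abstract mentions calculating FRET efficiencies, the standard conversion from dye-dye distance to instantaneous efficiency is the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch (the Förster radius R0 is an illustrative assumption, not a value from the paper):

        def fret_efficiency(r_nm: float, r0_nm: float = 5.4) -> float:
            """Instantaneous FRET efficiency E = 1 / (1 + (r/R0)**6)."""
            return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

        # Averaging over an ensemble of dye-dye distances, as one would over
        # simulation frames, gives the quantity compared with experiment.
        distances = [3.0, 4.5, 6.0, 7.5]  # nm, mock trajectory samples
        mean_e = sum(fret_efficiency(r) for r in distances) / len(distances)
        print(f"<E> = {mean_e:.2f}")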

  13. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
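
    A minimal sketch of the bookkeeping behind a multilevel LTS scheme (binary level structure assumed for illustration; this is not the LTS-Newmark algorithm itself): each element's stable step follows the CFL condition dt ~ h/c, and elements are binned into levels that take 2, 4, ... substeps per coarse step.

        import math

        def lts_levels(element_sizes, wave_speed, cfl=0.5):
            """Assign each element a refinement level so that element i
            takes 2**level[i] substeps per coarse time step."""
            dts = [cfl * h / wave_speed for h in element_sizes]
            dt_max = max(dts)
            return [math.ceil(math.log2(dt_max / dt)) for dt in dts]

        # One locally refined element with a 100x size contrast:
        print(lts_levels([1.0, 1.0, 0.01, 1.0], wave_speed=1.0))  # [0, 0, 7, 0]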

  14. An ultrahigh vacuum fast-scanning and variable temperature scanning tunneling microscope for large scale imaging.

    PubMed

    Diaconescu, Bogdan; Nenchev, Georgi; de la Figuera, Juan; Pohl, Karsten

    2007-10-01

    We describe the design and performance of a fast-scanning, variable temperature scanning tunneling microscope (STM) operating from 80 to 700 K in ultrahigh vacuum (UHV), which routinely achieves large scale atomically resolved imaging of compact metallic surfaces. An efficient in-vacuum vibration isolation and cryogenic system allows for no external vibration isolation of the UHV chamber. The design of the sample holder and STM head permits imaging of the same nanometer-size area of the sample before and after sample preparation outside the STM base. Refractory metal samples are frequently annealed up to 2000 K, and their cooldown time from room temperature to 80 K is 15 min. The vertical resolution of the instrument was found to be about 2 pm at room temperature. The coarse motor design allows both translation and rotation of the scanner tube. The total scanning area is about 8 × 8 μm². The sample temperature can be adjusted by a few tens of degrees while scanning over the same sample area.

  15. A family of dynamic models for large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality, the procedure is quite general and is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement and allow for a 30% reduction in the number of required filtering operations.
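
    As a sketch of the contrast drawn above (illustrative constants; in the actual dynamic procedure the coefficients are computed from the resolved field rather than prescribed): the classic Smagorinsky eddy viscosity uses a nondimensional coefficient, while a member of the generalized family may scale on the dissipation rate, so its dynamically determined coefficient carries dimensions.

        def nu_t_smagorinsky(strain_mag, delta, c_s=0.17):
            """Classic form: nu_t = (C_s * delta)**2 * |S|, C_s nondimensional."""
            return (c_s * delta) ** 2 * strain_mag

        def nu_t_dissipation_scaled(coeff, delta):
            """Dissipation-scaled form: nu_t = coeff * delta**(4/3), where
            coeff plays the role of C * eps**(1/3) and is dimensional."""
            return coeff * delta ** (4.0 / 3.0)

        print(nu_t_smagorinsky(50.0, 0.01))          # [m^2/s]
        print(nu_t_dissipation_scaled(0.3, 0.01))    # [m^2/s] if coeff in m^(2/3)/s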

  16. Characterization of spray-induced turbulence using fluorescence PIV

    NASA Astrophysics Data System (ADS)

    van der Voort, Dennis D.; Dam, Nico J.; Clercx, Herman J. H.; Water, Willem van de

    2018-07-01

    The strong shear induced by the injection of liquid sprays at high velocities induces turbulence in the surrounding medium. This, in turn, influences the motion of droplets as well as the mixing of air and vapor. Using fluorescence-based tracer particle image velocimetry, the velocity field surrounding 125-135 m/s sprays exiting a 200-μm nozzle is analyzed. For the first time, the small- and large-scale turbulence characteristics of the gas phase surrounding a spray have been measured simultaneously, using a large eddy model to determine the sub-grid scales. This further allows the calculation of the Stokes numbers of droplets, which indicates the influence of turbulence on their motion. The measurements lead to an estimate of the dissipation rate ε ≈ 35 m² s⁻³, a microscale Reynolds number Re_λ ≈ 170, and a Kolmogorov length scale of η ≈ 10⁻⁴ m. Using these dissipation rates to convert a droplet size distribution to a distribution of Stokes numbers, we show that only the large-scale motion of turbulence disperses the droplets in the current case, but the small scales will grow in importance with increasing levels of atomization and ambient pressures.
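
    The quoted scales follow from the standard Kolmogorov relations; a minimal sketch reproducing them and converting a droplet diameter to a Stokes number (air properties are assumed values, not from the paper):

        nu = 1.5e-5    # kinematic viscosity of air [m^2/s] (assumed)
        rho = 1.2      # air density [kg/m^3] (assumed)
        eps = 35.0     # dissipation rate quoted above [m^2/s^3]

        eta = (nu ** 3 / eps) ** 0.25      # Kolmogorov length, ~1e-4 m as quoted
        tau_eta = (nu / eps) ** 0.5        # Kolmogorov time scale [s]

        def stokes_number(d, rho_p=1000.0):
            """St = tau_p / tau_eta, with particle response time
            tau_p = rho_p * d**2 / (18 * mu) and mu = rho * nu."""
            tau_p = rho_p * d ** 2 / (18.0 * rho * nu)
            return tau_p / tau_eta

        print(f"eta = {eta:.1e} m, St(10 um droplet) = {stokes_number(10e-6):.2f}")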

  17. Large-scale synthesis of arrays of high-aspect-ratio rigid vertically aligned carbon nanofibres

    NASA Astrophysics Data System (ADS)

    Melechko, A. V.; McKnight, T. E.; Hensley, D. K.; Guillorn, M. A.; Borisevich, A. Y.; Merkulov, V. I.; Lowndes, D. H.; Simpson, M. L.

    2003-09-01

    We report on techniques for catalytic synthesis of rigid, high-aspect-ratio, vertically aligned carbon nanofibres by dc plasma-enhanced chemical vapour deposition, tailored for applications requiring arrays of individual fibres with long lengths (up to 20 µm), such as scanning probe microscopy, penetrant cell and tissue probing arrays, and mechanical insertion approaches for gene delivery to cell cultures. We demonstrate that the definition of catalyst nanoparticles is the critical step that enables growth of individual, long-length fibres and discuss methods for catalyst particle preparation that allow the growth of individual isolated nanofibres from catalyst dots with diameters as large as 500 nm. This development enables photolithographic definition of the catalyst and therefore the inexpensive, large-scale production of such arrays.

  18. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
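
    As a minimal illustration of one reviewed method, PCA-based feature extraction on a mock expression matrix (using scikit-learn, a common implementation that is not necessarily among the packages reviewed):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        expression = rng.normal(size=(100, 5000))   # 100 samples x 5000 genes (mock)

        pca = PCA(n_components=10)                  # reduced representation
        reduced = pca.fit_transform(expression)     # shape: (100, 10)
        print(reduced.shape, pca.explained_variance_ratio_[:3])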

  19. Robust Coordination for Large Sets of Simple Rovers

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian

    2006-01-01

    The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers, without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge, and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover-utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition we show that our method creates rover-utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations, we show that our method scales well with large numbers of rovers in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method is able to scale to large numbers of rovers and achieves up to 400% performance improvement over standard machine learning methods.
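
    The credit assignment described is in the spirit of difference utilities; a minimal sketch with a hypothetical global utility G (illustrative, not the authors' code):

        # Each rover is credited with the global utility minus the global
        # utility recomputed as if that rover had observed nothing, which
        # keeps the rover utility aligned with the global one.
        def global_utility(observations):
            return sum(set(observations))   # e.g., value of distinct sites seen

        def rover_utility(observations, i):
            without_i = observations[:i] + observations[i + 1:]
            return global_utility(observations) - global_utility(without_i)

        obs = [3, 3, 5, 7]   # site values observed by four rovers
        print([rover_utility(obs, i) for i in range(len(obs))])  # [0, 0, 5, 7]
        # Redundant rovers (both observing site value 3) earn no credit.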

  20. An Extended Eddy-Diffusivity Mass-Flux Scheme for Unified Representation of Subgrid-Scale Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.

    2018-03-01

    Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent updrafts and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of the up- and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
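
    A minimal sketch of the flux decomposition at the heart of any EDMF scheme (illustrative numbers; the scheme above adds prognostic plume areas, entrainment/detrainment, and a TKE-based diffusivity):

        def edmf_flux(plumes, k_eddy, dphi_dz_env):
            """Subgrid vertical flux of a scalar phi: a sum of plume
            mass-flux terms plus a downgradient eddy-diffusivity term.

            plumes: list of (area_fraction, w_plume, phi_plume - phi_mean).
            """
            mass_flux_part = sum(a * w * dphi for a, w, dphi in plumes)
            diffusive_part = -k_eddy * dphi_dz_env
            return mass_flux_part + diffusive_part

        # One updraft covering 10% of the grid box plus a diffusive environment:
        print(edmf_flux([(0.1, 1.5, 0.8)], k_eddy=20.0, dphi_dz_env=-0.004))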

  1. Spectral enstrophy budget in a shear-less flow with turbulent/non-turbulent interface

    NASA Astrophysics Data System (ADS)

    Cimarelli, Andrea; Cocconi, Giacomo; Frohnapfel, Bettina; De Angelis, Elisabetta

    2015-12-01

    A numerical analysis of the interaction between decaying shear-free turbulence and quiescent fluid is performed by means of global statistical budgets of enstrophy, both at the single-point and two-point levels. The single-point enstrophy budget allows us to recognize three physically relevant layers: a bulk turbulent region, an inhomogeneous turbulent layer, and an interfacial layer. Within these layers, enstrophy is produced, transferred, and finally destroyed, leading to a propagation of the turbulent front. These processes do not only depend on the position in the flow field but are also strongly scale dependent. In order to tackle this multi-dimensional behaviour of enstrophy in the space of scales and in physical space, we analyse the spectral enstrophy budget equation. The picture consists of an inviscid spatial cascade of enstrophy from large to small scales parallel to the interface moving towards the interface. At the interface, this phenomenon breaks down, giving way to an anisotropic cascade where large-scale structures exhibit only a cascade process normal to the interface, thus reducing their thickness while retaining their lengths parallel to the interface. The observed behaviour could be relevant for both the theoretical and the modelling approaches to flows with interacting turbulent/non-turbulent regions. The scale properties of the turbulent propagation mechanisms highlight that the inviscid turbulent transport is a large-scale phenomenon. On the contrary, the viscous diffusion, commonly associated with small-scale mechanisms, exhibits a much richer physics involving small lengths normal to the interface but at the same time large scales parallel to the interface.

  2. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  3. Large-scale protein/antibody patterning with limiting unspecific adsorption

    NASA Astrophysics Data System (ADS)

    Fedorenko, Viktoriia; Bechelany, Mikhael; Janot, Jean-Marc; Smyntyna, Valentyn; Balme, Sebastien

    2017-10-01

    A simple synthetic route based on nanosphere lithography has been developed to design a large-scale nanoarray for specific control of protein anchoring. This technique, based on two-dimensional (2D) colloidal crystals composed of polystyrene (PS) spheres, allows the easy and inexpensive fabrication of large arrays (up to several centimeters). A silicon wafer coated with a thin adhesion layer of chromium (15 nm) and a layer of gold (50 nm) is used as a substrate. PS spheres are deposited on the gold surface using the floating-transferring technique. The PS spheres were then functionalized with PEG-biotin, and the defects with a self-assembled monolayer (SAM) of PEG to prevent unspecific adsorption. Using epifluorescence microscopy, we show that after immersion of the sample in a target protein (avidin and anti-avidin) solution, the latter is specifically located on the polystyrene spheres. These results are thus meaningful for the exploration of devices based on large-scale nanoarrays of PS spheres and can be used for the detection of target proteins or simply to pattern a surface with specific proteins.

  4. Panoptes: web-based exploration of large scale genome variation data.

    PubMed

    Vauterin, Paul; Jeffery, Ben; Miles, Alistair; Amato, Roberto; Hart, Lee; Wright, Ian; Kwiatkowski, Dominic

    2017-10-15

    The size and complexity of modern large-scale genome variation studies demand novel approaches for exploring and sharing the data. In order to unlock the potential of these data for a broad audience of scientists with various areas of expertise, a unified exploration framework is required that is accessible, coherent and user-friendly. Panoptes is an open-source software framework for collaborative visual exploration of large-scale genome variation data and associated metadata in a web browser. It relies on technology choices that allow it to operate in near real-time on very large datasets. It can be used to browse rich, hybrid content in a coherent way, and offers interactive visual analytics approaches to assist the exploration. We illustrate its application using genome variation data of Anopheles gambiae, Plasmodium falciparum and Plasmodium vivax. Freely available at https://github.com/cggh/panoptes, under the GNU Affero General Public License. paul.vauterin@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  5. A spatial picture of the synthetic large-scale motion from dynamic roughness

    NASA Astrophysics Data System (ADS)

    Huynh, David; McKeon, Beverley

    2017-11-01

    Jacobi and McKeon (2011) set up a dynamic roughness apparatus to excite a synthetic, travelling wave-like disturbance in a wind-tunnel boundary layer study. In the present work, this dynamic roughness has been adapted for a flat-plate, turbulent boundary layer experiment in a water tunnel. A key advantage of operating in water as opposed to air is the longer flow timescales. This makes accessible higher non-dimensional actuation frequencies and correspondingly shorter synthetic length scales, and is thus more amenable to particle image velocimetry. As a result, this experiment provides a novel spatial picture of the synthetic mode, the coupled small scales, and their streamwise development. It is demonstrated that varying the roughness actuation frequency allows for significant tuning of the streamwise wavelength of the synthetic mode, with a range of 3δ-13δ being achieved. Employing a phase-locked decomposition, spatial snapshots are constructed of the synthetic large scale and used to analyze its streamwise behavior. Direct spatial filtering is used to separate the synthetic large scale and the related small scales, and the results are compared to those obtained by temporal filtering that invokes Taylor's hypothesis. The support of AFOSR (Grant # FA9550-16-1-0361) is gratefully acknowledged.

  6. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; hide

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low ℓ. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ. (c) (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  7. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    PubMed

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  8. Viscous-enstrophy scaling law for Navier-Stokes reconnection

    NASA Astrophysics Data System (ADS)

    Kerr, Robert M.

    2017-11-01

    Simulations of perturbed, helical trefoil vortex knots and anti-parallel vortices find ν-independent collapse of the temporally scaled (√ν Z)^{-1/2}, where Z is the enstrophy, between when the loops first touch at t_Γ and when reconnection ends at t_x, for viscosities ν varying by a factor of 256. Due to mathematical bounds upon higher-order norms, this collapse requires that the domain increase as ν decreases, possibly to allow large-scale negative helicity to grow as compensation for small-scale positive helicity and enstrophy growth. This mechanism could be a step towards explaining how smooth solutions of the Navier-Stokes equations can generate finite-energy dissipation in a finite time as ν → 0.

  9. Morphological response of a large-scale coastal blowout to a strong magnitude transport event

    NASA Astrophysics Data System (ADS)

    Delgado-Fernandez, Irene; Jackson, Derek; Smith, Alexander; Smyth, Thomas

    2017-04-01

    Large-scale blowouts are fundamental features of many coastal dune fields in temperate areas around the world. These distinctive erosional (mostly unvegetated) landform features are often characterised by a significant depression area and a connected depositional lobe at their downwind edges. These areas also provide important transport corridors to inland parts of the dune system and can provide ideal habitats for specialist flora and fauna as well as helping to enhance landscape diversity. The actual morphology and shape/size of blowouts can significantly modify the overlying atmospheric boundary layer of the wind, influencing wind flow steering and intensity within the blowout, and ultimately aeolian sediment transport. While investigations of morphological changes within blowouts have largely focused on the medium (months) to long (annual/decadal) temporal scale, studies of aeolian transport dynamics within blowouts have predominantly focused on the short-term (event) scale. Work on wind-transport processes in blowouts is still relatively rare, with ad-hoc studies providing only limited information on airflow and aeolian transport. Large-scale blowouts are characterised by elongated basins that can reach hundreds of meters, potentially resulting in airflow and transport dynamics that are very different from those of their smaller-scale counterparts. This research focuses on a short-term, strong wind event measured at the Devil's Hole blowout (Sefton dunes, NW England), a large-scale blowout feature approximately 300 m in length and 100 m in width. In situ measurements of airflow and aeolian transport were collected during a short-term experiment on 22 October 2015. A total of twenty-three 3D ultrasonic anemometers, sand traps, and Wenglor sensors were deployed in a spatial grid covering the distal end of the basin, walls, and depositional lobe. Terrestrial laser scanning (TLS) was used to quantify morphological changes within the blowout before and after the strong magnitude transport event. This allowed, for the first time, examination of the morphological response as a direct result of a high energy wind event as it passes through a large-scale blowout. Results indicate strong steering and acceleration of the wind along the blowout basin and up the south wall opposite to the incident regional winds. These accelerated flows generated very strong transport rates of up to 3 g/s along the basin, and moderately strong transport rates of up to 1.5 g/s up the steep north wall. The coupling of high-frequency wind events and transport response, together with topographic changes defined by TLS data, allows for the first time the morphological evolution of a coastal blowout landform to be connected with the localised driving processes.

  10. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  11. Soft collinear effective theory for heavy WIMP annihilation

    DOE PAGES

    Bauer, Martin; Cohen, Timothy; Hill, Richard J.; ...

    2015-01-19

    In a large class of models for Weakly Interacting Massive Particles (WIMPs), the WIMP mass M lies far above the weak scale m_W. This work identifies universal Sudakov-type logarithms ∼ α log²(2M/m_W) that spoil the naive convergence of perturbation theory for annihilation processes. An effective field theory (EFT) framework is presented, allowing the systematic resummation of these logarithms. Another impact of the large separation of scales is that a long-distance wavefunction distortion from electroweak boson exchange leads to observable modifications of the cross section. Careful accounting of momentum regions in the EFT allows the rigorous disentanglement of this so-called Sommerfeld enhancement from the short-distance hard annihilation process. In addition, the WIMP is described as a heavy-particle field, while the electroweak gauge bosons are treated as soft and collinear fields. Hard matching coefficients are computed at renormalization scale μ ∼ 2M, then evolved down to μ ∼ m_W, where electroweak symmetry breaking is incorporated and the matching onto the relevant quantum mechanical Hamiltonian is performed. The example of an SU(2)_W triplet scalar dark matter candidate annihilating to line photons is used for concreteness, allowing the numerical exploration of the impact of next-to-leading order corrections and log resummation. As a result, for M ≃ 3 TeV, the resummed Sommerfeld-enhanced cross section is reduced by a factor of ∼3 with respect to the tree-level fixed-order result.

  12. Cloud/climate sensitivity experiments

    NASA Technical Reports Server (NTRS)

    Roads, J. O.; Vallis, G. K.; Remer, L.

    1982-01-01

    A study of the relationships between large-scale cloud fields and large-scale circulation patterns is presented. The basic tool is a multi-level numerical model comprising conservation equations for temperature, water vapor and cloud water and appropriate parameterizations for evaporation, condensation, precipitation and radiative feedbacks. Incorporating an equation for cloud water in a large-scale model is somewhat novel and allows the formation and advection of clouds to be treated explicitly. The model is run on a two-dimensional, vertical-horizontal grid with constant winds. It is shown that cloud cover increases with decreased eddy vertical velocity, decreased horizontal advection, decreased atmospheric temperature, increased surface temperature, and decreased precipitation efficiency. The cloud field is found to be well correlated with the relative humidity field except at the highest levels. When radiative feedbacks are incorporated and the temperature increased by increasing CO2 content, cloud amounts decrease at upper levels or, equivalently, cloud-top height falls. This reduces the temperature response, especially at upper levels, compared with an experiment in which cloud cover is fixed.

  13. Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase

    PubMed Central

    Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele

    2015-01-01

    Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins. PMID:25672826

  14. Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase

    NASA Astrophysics Data System (ADS)

    Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele

    2015-02-01

    Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.

  15. What drives the formation of massive stars and clusters?

    NASA Astrophysics Data System (ADS)

    Ochsendorf, Bram; Meixner, Margaret; Roman-Duval, Julia; Evans, Neal J., II; Rahman, Mubdi; Zinnecker, Hans; Nayak, Omnarayani; Bally, John; Jones, Olivia C.; Indebetouw, Remy

    2018-01-01

    Galaxy-wide surveys allow us to study star formation in unprecedented ways. In this talk, I will discuss our analysis of the Large Magellanic Cloud (LMC) and the Milky Way, and illustrate how studying both the large- and small-scale structure of galaxies is critical in addressing the question: what drives the formation of massive stars and clusters? I will show that ‘turbulence-regulated’ star formation models do not reproduce massive star formation properties of GMCs in the LMC and Milky Way: this suggests that theory currently does not capture the full complexity of star formation on small scales. I will also report on the discovery of a massive star forming complex in the LMC, which in many ways manifests itself as an embedded twin of 30 Doradus: this may shed light on the formation of R136 and 'Super Star Clusters' in general. Finally, I will highlight what we can expect in the next years in the field of star formation with large-scale sky surveys, ALMA, and our JWST-GTO program.

  16. Inquiry-Based Educational Design for Large-Scale High School Astronomy Projects Using Real Telescopes

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena

    2015-12-01

    In this paper, we outline the theory behind the educational design used to implement a large-scale high school astronomy education project. This design was created in response to the realization of ineffective educational design in the early stages of the project. The new design follows an iterative improvement model where the materials and general approach can evolve in response to solicited feedback. The improvement cycle concentrates on avoiding overly positive self-evaluation while addressing relevant external school and community factors and concentrating on backward mapping from clearly set goals. Limiting factors, including time, resources, support and the potential for failure in the classroom, are dealt with as much as possible in the large-scale design, allowing teachers the best chance of successful implementation in their real-world classrooms. The actual approach adopted following the principles of this design is also outlined, which has seen success in bringing real astronomical data and access to telescopes into the high school classroom.

  17. Regional reanalysis without local data: Exploiting the downscaling paradigm

    NASA Astrophysics Data System (ADS)

    von Storch, Hans; Feser, Frauke; Geyer, Beate; Klehmet, Katharina; Li, Delei; Rockel, Burkhardt; Schubert-Frisius, Martina; Tim, Nele; Zorita, Eduardo

    2017-08-01

    This paper demonstrates two important aspects of regional dynamical downscaling of multidecadal atmospheric reanalysis. First, that in this way skillful regional descriptions of multidecadal climate variability may be constructed in regions with little or no local data. Second, that the concept of large-scale constraining allows global downscaling, so that global reanalyses may be completed by additions of consistent detail in all regions of the world. Global reanalyses suffer from inhomogeneities. However, their large-scale components are mostly homogeneous; therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the reanalyses with regional detail—wherever the condition of homogeneity of the description of large scales is fulfilled. Technically, this can be done by dynamical downscaling using a regional or global climate model, whose large scales are constrained by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional weather risks—in particular marine risks—was identified. We have run this system in regions with reduced or absent local data coverage, such as Central Siberia, the Bohai and Yellow Sea, Southwestern Africa, and the South Atlantic. Also, a global simulation was computed, which adds regional features to prescribed global dynamics. Our cases demonstrate that spatially detailed reconstructions of the climate state and its change in the recent three to six decades add useful supplementary information to existing observational data for midlatitude and subtropical regions of the world.

  18. In-chip direct laser writing of a centimeter-scale acoustic micromixer

    NASA Astrophysics Data System (ADS)

    van't Oever, Jorick; Spannenburg, Niels; Offerhaus, Herman; van den Ende, Dirk; Herek, Jennifer; Mugele, Frieder

    2015-04-01

    A centimeter-scale micromixer was fabricated by two-photon polymerization inside a closed microchannel using direct laser writing. The structure consists of a repeating pattern of 20 μm × 20 μm × 155 μm acrylate pillars and extends over 1.2 cm. Using external ultrasonic actuation, the micropillars locally induce streaming with flow speeds of 30 μm s⁻¹. The fabrication method allows for considerable flexibility and more complex designs.

  19. Blazing Signature Filter: a library for fast pairwise similarity comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan

    Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to efficiently identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
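
    A minimal sketch of the binarize-then-bit-ops idea described above (illustrative thresholding; not the BSF library itself):

        def to_bits(values, threshold=0.0):
            """Turn a profile into a bit vector: bit i set where values[i]
            exceeds the threshold (e.g., gene i is up-regulated)."""
            bits = 0
            for i, v in enumerate(values):
                if v > threshold:
                    bits |= 1 << i
            return bits

        def bit_similarity(a: int, b: int) -> int:
            """Count positions set in both signatures (AND + popcount);
            a cheap coarse filter before expensive similarity measures."""
            return bin(a & b).count("1")

        sig1 = to_bits([1.2, -0.3, 0.8, 0.0])
        sig2 = to_bits([0.9, 0.4, -1.1, 0.0])
        print(bit_similarity(sig1, sig2))   # 1: only the first gene is shared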

  20. Smectic viral capsids and the aneurysm instability

    NASA Astrophysics Data System (ADS)

    Dharmavaram, S.; Rudnick, J.; Lawrence, C. M.; Bruinsma, R. F.

    2018-05-01

    The capsids of certain Archaea-infecting viruses undergo large shape changes, while maintaining their integrity against rupture by osmotic pressure. We propose that these capsids are in a smectic liquid crystalline state, with the capsid proteins assembling along spirals. We show that smectic capsids are intrinsically stabilized against the formation of localized bulges with non-zero Gauss curvature while still allowing for large-scale cooperative shape transformation that involves global changes in the Gauss curvature.

  1. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large-scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at a m2 spatial resolution would allow detailed mapping of the near-ground distribution and anthropogenic sources in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on the

  2. Modified circular velocity law

    NASA Astrophysics Data System (ADS)

    Djeghloul, Nazim

    2018-05-01

    A modified circular velocity law is presented for a test body orbiting a spherically symmetric mass. This law exhibits a distance scale parameter and allows one to recover both the usual Newtonian behaviour at shorter distances and a constant velocity limit at large scale. Application to the Galaxy predicts the known behaviour and also leads to a galactic mass in accordance with the measured visible stellar mass, so that additional dark matter inside the Galaxy can be avoided. It is also shown that this circular velocity law can be embedded in a geometrical description of spacetime within the standard general relativity framework upon relaxing the usual asymptotic flatness condition. This formulation allows the introduced Newtonian scale limit to be redefined in terms of the central mass exclusively. Moreover, a satisfactory answer to the galactic escape speed problem can be provided, indicating the possibility that one can also get rid of the dark matter halo outside the Galaxy.

  3. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  4. Materials for stem cell factories of the future

    NASA Astrophysics Data System (ADS)

    Celiz, Adam D.; Smith, James G. W.; Langer, Robert; Anderson, Daniel G.; Winkler, David A.; Barrett, David A.; Davies, Martyn C.; Young, Lorraine E.; Denning, Chris; Alexander, Morgan R.

    2014-06-01

    Polymeric substrates are being identified that could permit translation of human pluripotent stem cells from laboratory-based research to industrial-scale biomedicine. Well-defined materials are required to allow cell banking and to provide the raw material for reproducible differentiation into lineages for large-scale drug-screening programs and clinical use. Yet more than 1 billion cells for each patient are needed to replace losses during heart attack, multiple sclerosis and diabetes. Producing this number of cells is challenging, and a rethink of the current predominant cell-derived substrates is needed to provide technology that can be scaled to meet the needs of millions of patients a year. In this Review, we consider the role of materials discovery, an emerging area of materials chemistry that is in large part driven by the challenges posed by biologists to materials scientists.

  5. Engineering of Baeyer-Villiger monooxygenase-based Escherichia coli biocatalyst for large scale biotransformation of ricinoleic acid into (Z)-11-(heptanoyloxy)undec-9-enoic acid

    PubMed Central

    Seo, Joo-Hyun; Kim, Hwan-Hee; Jeon, Eun-Yeong; Song, Young-Ha; Shin, Chul-Soo; Park, Jin-Byung

    2016-01-01

    Baeyer-Villiger monooxygenases (BVMOs) are able to catalyze regiospecific Baeyer-Villiger oxygenation of a variety of cyclic and linear ketones to generate the corresponding lactones and esters, respectively. However, the enzymes are usually difficult to express in a functional form in microbial cells and are rather unstable under process conditions, hindering their large-scale application. We therefore investigated engineering of the BVMO from Pseudomonas putida KT2440 and of its gene expression system to improve activity and stability for large-scale biotransformation of ricinoleic acid (1) into the ester (i.e., (Z)-11-(heptanoyloxy)undec-9-enoic acid) (3), which can be hydrolyzed into 11-hydroxyundec-9-enoic acid (5) (i.e., a precursor of polyamide-11) and n-heptanoic acid (4). Polyionic tag-based fusion engineering of the BVMO and the use of a synthetic promoter for constitutive enzyme expression allowed recombinant Escherichia coli expressing the BVMO and the secondary alcohol dehydrogenase of Micrococcus luteus to produce the ester (3) at 85 mM (26.6 g/L) within 5 h. The 5 L scale biotransformation process was then successfully scaled up to a 70 L bioreactor; 3 was produced at over 70 mM (21.9 g/L) in the culture medium 6 h after the start of biotransformation. This study demonstrated that BVMO-based whole-cell reactions can be applied to large-scale biotransformations. PMID:27311560

  6. Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell

    NASA Astrophysics Data System (ADS)

    Lengden, Michael; Cunningham, Robert; Johnstone, Walter

    2011-10-01

    A new in-line, real time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to prove the fast response and accuracy of the technology as compared to a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon-dioxide and carbon-monoxide concentrations.

  7. Four-center bubbled BPS solutions with a Gibbons-Hawking base

    NASA Astrophysics Data System (ADS)

    Heidmann, Pierre

    2017-10-01

    We construct four-center bubbled BPS solutions with a Gibbons-Hawking base space. We give a systematic procedure to build scaling solutions: starting from three-supertube configurations and using generalized spectral flows and gauge transformations to extend to solutions with four Gibbons-Hawking centers. This allows us to construct very large families of smooth horizonless solutions that have the same charges and angular momentum as supersymmetric black holes with a macroscopically large horizon area. Our construction reveals that all scaling solutions with four Gibbons-Hawking centers have an angular momentum at around 99% of the cosmic censorship bound. We give both an analytical and a numerical explanation for this unexpected feature.

  8. Large-scale Growth and Simultaneous Doping of Molybdenum Disulfide Nanosheets

    PubMed Central

    Kim, Seong Jun; Kang, Min-A; Kim, Sung Ho; Lee, Youngbum; Song, Wooseok; Myung, Sung; Lee, Sun Sook; Lim, Jongsun; An, Ki-Seok

    2016-01-01

    A facile method that uses chemical vapor deposition (CVD) for the simultaneous growth and doping of large-scale molybdenum disulfide (MoS2) nanosheets was developed. We employed metalloporphyrin as a seeding promoter layer for the uniform growth of MoS2 nanosheets. Here, a hybrid deposition system that combines thermal evaporation and atomic layer deposition (ALD) was utilized to prepare the promoter. The doping effect of the promoter was verified by X-ray photoelectron spectroscopy and Raman spectroscopy. In addition, the carrier density of the MoS2 nanosheets was manipulated by adjusting the thickness of the metalloporphyrin promoter layers, which allowed the electrical conductivity of the MoS2 to be tuned. PMID:27044862

  9. Large scale seismic vulnerability and risk evaluation of a masonry churches sample in the historical centre of Naples

    NASA Astrophysics Data System (ADS)

    Formisano, Antonio; Ciccone, Giuseppe; Mele, Annalisa

    2017-11-01

    This paper investigates the seismic vulnerability and risk of fifteen masonry churches located in the historical centre of Naples. The analysis method used is derived from a procedure already implemented by the University of Basilicata on the churches of Matera. Appropriate technical survey forms were used to evaluate the seismic vulnerability and hazard indexes of the selected churches in the study area. The data obtained from this procedure allow both the plotting of vulnerability maps and the provision of seismic risk indicators for all churches. Comparison of the achieved indexes allows the health state of the inspected churches to be evaluated, so as to establish a priority scale for future retrofitting interventions.

  10. Utilizing Online Training for Child Sexual Abuse Prevention: Benefits and Limitations

    ERIC Educational Resources Information Center

    Paranal, Rechelle; Thomas, Kiona Washington; Derrick, Christina

    2012-01-01

    The prevalence of child sexual abuse demands innovative approaches to prevent further victimization. The online environment provides new opportunities to expand existing child sexual abuse prevention trainings that target adult gatekeepers and allow for large scale interventions that are fiscally viable. This article discusses the benefits and…

  11. Identifying Country-Specific Cultures of Physics Education: A Differential Item Functioning Approach

    ERIC Educational Resources Information Center

    Mesic, Vanes

    2012-01-01

    In international large-scale assessments of educational outcomes, student achievement is often represented by unidimensional constructs. This approach allows for drawing general conclusions about country rankings with respect to the given achievement measure, but it typically does not provide specific diagnostic information which is necessary for…

  12. Longitudinal Multistage Testing

    ERIC Educational Resources Information Center

    Pohl, Steffi

    2013-01-01

    This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…

  13. A simple transferable adaptive potential to study phase separation in large-scale xMgO-(1-x)SiO2 binary glasses.

    PubMed

    Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried

    2015-10-21

    A simple transferable adaptive model is developed which, for the first time, allows molecular dynamics simulation of the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that the separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to an interpretation involving the formation of pure clusters, or involving the modified random network model. Our adaptive model, efficient at reproducing known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich, Si-poor nanoparticles in an Mg-poor, Si-rich matrix from a 0.1MgO-0.9SiO2 melt.

  14. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
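    For readers unfamiliar with the quasi-Newton idea discussed above, a minimal dense Broyden ("good") iteration is sketched below. This is not the Sandia implementation; a limited-memory variant would store the secant update pairs instead of the full matrix, and a finite-difference Jacobian is used here only to seed the approximation.

    ```python
    import numpy as np

    def fd_jacobian(F, x, eps=1e-6):
        """One-sided finite-difference Jacobian, used only to seed Broyden."""
        x = np.asarray(x, dtype=float)
        f0 = F(x)
        J = np.zeros((f0.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - f0) / eps
        return J

    def broyden(F, x0, tol=1e-10, max_iter=50):
        """Broyden's 'good' method: keep an approximate Jacobian B and refresh
        it with a rank-one secant update, so the true Jacobian is never
        re-evaluated after the initial seed."""
        x = np.asarray(x0, dtype=float)
        B = fd_jacobian(F, x)
        f = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(f) < tol:
                break
            s = np.linalg.solve(B, -f)               # quasi-Newton step
            x = x + s
            f_new = F(x)
            y = f_new - f
            B += np.outer(y - B @ s, s) / (s @ s)    # secant condition B_new s = y
            f = f_new
        return x

    # small nonlinear test system: x^2 + y^2 = 4 and x = y, root near (1.414, 1.414)
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
    print(broyden(F, [1.0, 0.5]))
    ```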

  15. Lagrangian and Eulerian statistics obtained from direct numerical simulations of homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle D.; Eaton, John K.

    1991-01-01

    Direct numerical simulation is used to study dispersion in decaying isotropic turbulence and homogeneous shear flow. Both Lagrangian and Eulerian data are presented allowing direct comparison, but at fairly low Reynolds number. The quantities presented include properties of the dispersion tensor, isoprobability contours of particle displacement, Lagrangian and Eulerian velocity autocorrelations and time scale ratios, and the eddy diffusivity tensor. The Lagrangian time microscale is found to be consistently larger than the Eulerian microscale, presumably due to the advection of the small scales by the large scales in the Eulerian reference frame.

  16. Large-scale road safety programmes in low- and middle-income countries: an opportunity to generate evidence.

    PubMed

    Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David

    2013-01-01

    The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence generation on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces such a road safety multi-country initiative, the Road Safety in 10 Countries Project (RS-10). By building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers and policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for the evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches on a real-world, large-scale road safety evaluation and to generate new knowledge for the field of road safety.

  17. The cavitation erosion of ultrasonic sonotrode during large-scale metallic casting: Experiment and simulation.

    PubMed

    Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang

    2018-05-01

    Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When introducing power ultrasound, severe cavitation erosion was found to reproducibly occur at some specific positions on the ultrasonic sonotrodes. However, no cavitation erosion was present on ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts only happened at the specific positions corresponding to severe cavitation erosion. This work will allow for developing more advanced ultrasonic sonotrodes with better cavitation erosion-resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design.

  18. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on a new integral form of Maxwell's equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for the 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of subsurface targets.

  19. Towards Simulating the Transverse Ising Model in a 2D Array of Trapped Ions

    NASA Astrophysics Data System (ADS)

    Sawyer, Brian

    2013-05-01

    Two-dimensional Coulomb crystals provide a useful platform for large-scale quantum simulation. Penning traps enable confinement of large numbers of ions (>100) and allow for the tunable-range spin-spin interactions demonstrated in linear ion strings, facilitating simulation of quantum magnetism at a scale that is currently intractable on classical computers. We readily confine hundreds of Doppler laser-cooled 9Be+ ions within a Penning trap, producing a planar array of ions with self-assembled triangular order. The transverse "drumhead" modes of our 2D crystal, along with the valence electron spin of Be+, serve as a resource for generating spin-motion and spin-spin entanglement. Applying a spin-dependent optical dipole force (ODF) to the ion array, we perform spectroscopy and thermometry of individual drumhead modes. This ODF also allows us to engineer long-range Ising spin couplings of either ferromagnetic or anti-ferromagnetic character whose approximate power-law scaling with inter-ion distance d may be varied continuously from 1/d^0 to 1/d^3. An effective transverse magnetic field is applied via microwave radiation at the ~124-GHz spin-flip frequency, and ground states of the effective Ising Hamiltonian may in principle be prepared adiabatically by slowly decreasing this transverse field in the presence of the induced Ising coupling. Long-range anti-ferromagnetic interactions are of particular interest due to their inherent spin frustration and resulting large, near-degenerate manifold of ground states. We acknowledge support from NIST and the DARPA-OLE program.
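    A minimal sketch of the coupling structure described above: ions on a small triangular-lattice patch with long-range Ising couplings J_ij = J0/d_ij^alpha, where alpha is tunable between 0 and 3. The geometry, J0 and the alpha values below are placeholders, not experimental parameters.

    ```python
    import numpy as np

    def triangular_positions(n_side, spacing=1.0):
        """Ion positions on a small triangular-lattice patch (planar crystal)."""
        pts = [(spacing * (col + 0.5 * row), spacing * row * np.sqrt(3) / 2)
               for row in range(n_side) for col in range(n_side)]
        return np.array(pts)

    def ising_couplings(positions, J0=1.0, alpha=1.0):
        """Long-range couplings J_ij = J0 / d_ij^alpha; alpha in [0, 3] mimics
        the tunability provided by the spin-dependent optical dipole force."""
        d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        J = np.zeros_like(d)
        off_diag = d > 0
        J[off_diag] = J0 / d[off_diag] ** alpha
        return J

    pos = triangular_positions(4)                  # 16 ions
    for alpha in (0.0, 1.5, 3.0):
        J = ising_couplings(pos, alpha=alpha)
        print(f"alpha = {alpha}: strongest/weakest coupling ratio = "
              f"{J[J > 0].max() / J[J > 0].min():.1f}")
    ```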

  20. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    Over the past couple of years, with support from NASA, we used a large collection of data from GPS, VLBI, SLR, and DORIS networks which span the Western U.S. Cordillera (WUSC) to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work was roughly divided into an analysis of these space geodetic observations to infer the deformation field across and within the entire plate boundary zone, and an investigation of the implications of this deformation field regarding plate boundary dynamics. Following the determination of the first generation WUSC velocity solution, we placed high priority on the dissemination of the velocity estimates. With in-kind support from the Smithsonian Astrophysical Observatory, we constructed a web-site which allows anyone to access the data, and to determine their own velocity reference frame.

  2. The Department of Defense and the Power of Cloud Computing: Weighing Acceptable Cost Versus Acceptable Risk

    DTIC Science & Technology

    2016-04-01

    the DOD will put DOD systems and data at a risk level comparable to that of their neighbors in the cloud. Just as a user browses a Web page on the...proxy servers for controlling user access to Web pages, and large-scale storage for data management. Each of these devices allows access to the...user to develop applications. Acunetics.com describes Web applications as “computer programs allowing Website visitors to submit and retrieve data

  3. Design of an Airlift Bioreactor

    DOE Data Explorer

    Jiao, Yongqin; Park, Dan; Ho, Lewis

    2017-03-13

    An important consideration for the process design is cell immobilization-enabled flow-through operation. Large-scale biosorption relies on cells that are immobilized on a supporting substrate and used to 'attract' metal ions. Cell immobilization allows easy separation of the feed solution and the rare-earth elements (REEs) that are attached to the cell surface. It also allows continuous operation without the need for energy-intensive centrifugation or filtration. Lightweight, high-surface-area, low-cost (~$200/m³) high-density polyethylene (HDPE) plastic disks are used as cell carriers for biofilm formation.

  4. News and views in discontinuous phase transitions

    NASA Astrophysics Data System (ADS)

    Nagler, Jan

    2014-03-01

    Recent progress in the theory of discontinuous percolation allows us to better understand the sudden emergence of large-scale connectedness, both in networked systems and on the lattice. We analytically study mechanisms for the amplification of critical fluctuations at the phase transition point, non-self-averaging, and power-law fluctuations. A single-event analysis allows us to establish criteria for discontinuous percolation transitions, even on the high-dimensional lattice. Some applications, such as salad-bowl percolation and inverse fragmentation, are discussed.
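    One standard mechanism in this literature, which illustrates how suppressing large-cluster growth sharpens the percolation transition, is the Achlioptas product rule; it is not necessarily the specific model analysed above. A minimal union-find simulation:

    ```python
    import random

    class UnionFind:
        """Weighted union-find with path compression for cluster tracking."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
        def find(self, a):
            while self.parent[a] != a:
                self.parent[a] = self.parent[self.parent[a]]
                a = self.parent[a]
            return a
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return ra
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]
            return ra

    random.seed(0)
    n = 100_000
    uf = UnionFind(n)
    largest = 1
    for step in range(1, n + 1):
        # product rule: of two random candidate edges, add the one whose
        # endpoint clusters have the smaller size product, which suppresses
        # early giant-cluster growth and makes the transition abrupt
        e1 = (random.randrange(n), random.randrange(n))
        e2 = (random.randrange(n), random.randrange(n))
        w1 = uf.size[uf.find(e1[0])] * uf.size[uf.find(e1[1])]
        w2 = uf.size[uf.find(e2[0])] * uf.size[uf.find(e2[1])]
        a, b = e1 if w1 <= w2 else e2
        root = uf.union(a, b)
        largest = max(largest, uf.size[root])
        if step % 20_000 == 0:
            print(f"edges/node = {step/n:.2f}  largest-cluster fraction = {largest/n:.3f}")
    ```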

  5. Validation of large-scale, monochromatic UV disinfection systems for drinking water using dyed microspheres.

    PubMed

    Blatchley, E R; Shen, C; Scheible, O K; Robinson, J P; Ragheb, K; Bergstrom, D E; Rokjer, D

    2008-02-01

    Dyed microspheres have been developed as a new method for validation of ultraviolet (UV) reactor systems. When properly applied, dyed microspheres allow measurement of the UV dose distribution delivered by a photochemical reactor for a given operating condition. Prior to this research, dyed microspheres had only been applied to a bench-scale UV reactor. The goal of this research was to extend the application of dyed microspheres to large-scale reactors. Dyed microsphere tests were conducted on two prototype large-scale UV reactors at the UV Validation and Research Center of New York (UV Center) in Johnstown, NY. All microsphere tests were conducted under conditions that had been used previously in biodosimetry experiments involving two challenge bacteriophages: MS2 and Qbeta. Numerical simulations based on computational fluid dynamics and irradiance field modeling were also performed for the same set of operating conditions used in the microsphere assays. Microsphere tests on the first reactor illustrated difficulties in sample collection and in discriminating microspheres against ambient particles. Changes in sample collection and work-up were implemented in tests conducted on the second reactor, which allowed improvements in microsphere capture and discrimination against the background. Under these conditions, estimates of the UV dose distribution from the microsphere assay were consistent with numerical simulations and with the results of biodosimetry using both challenge organisms. The combined application of dyed microspheres, biodosimetry, and numerical simulation offers the potential to provide a more in-depth description of reactor performance than any of these methods individually, or in combination. This approach also has the potential to substantially reduce uncertainties in reactor validation, thereby leading to better understanding of reactor performance, improvements in reactor design, and decreases in reactor capital and operating costs.

  6. Statistical Analysis of Small-Scale Magnetic Flux Emergence Patterns: A Useful Subsurface Diagnostic?

    NASA Astrophysics Data System (ADS)

    Lamb, Derek A.

    2016-10-01

    While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.
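    A simple way to test the randomness assumption described above is a permutation test on space-time nearest-neighbour distances: shuffle the event times and ask whether the observed events sit closer together than chance. The sketch below runs on a synthetic uniform catalogue, so no clustering should be detected; a real analysis would ingest an HMI emergence-event list, and the space-time scaling tau is an arbitrary choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mean_nn_distance(xy, t, tau=1.0):
        """Mean nearest-neighbour distance in (x, y, t/tau) space; tau sets how
        many days of separation count as much as one Mm (arbitrary here)."""
        pts = np.column_stack([xy, t / tau])
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1).mean()

    # synthetic catalogue: 300 events, positions in Mm, times in days; uniform
    # by construction, so this example should NOT detect clustering
    xy = rng.uniform(0.0, 500.0, size=(300, 2))
    t = rng.uniform(0.0, 30.0, size=300)

    observed = mean_nn_distance(xy, t)
    null = np.array([mean_nn_distance(xy, rng.permutation(t)) for _ in range(200)])
    p_value = np.mean(null <= observed)   # small p => events closer than chance
    print(f"observed = {observed:.2f}, p = {p_value:.2f}")
    ```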

  7. Large-scale data integration framework provides a comprehensive view on glioblastoma multiforme.

    PubMed

    Ovaska, Kristian; Laakso, Marko; Haapa-Paananen, Saija; Louhimo, Riku; Chen, Ping; Aittomäki, Viljami; Valo, Erkka; Núñez-Fontarnau, Javier; Rantanen, Ville; Karinen, Sirkku; Nousiainen, Kari; Lahesmaa-Korpinen, Anna-Maria; Miettinen, Minna; Saarinen, Lilli; Kohonen, Pekka; Wu, Jianmin; Westermarck, Jukka; Hautaniemi, Sampsa

    2010-09-07

    Coordinated efforts to collect large-scale data sets provide a basis for systems level understanding of complex diseases. In order to translate these fragmented and heterogeneous data sets into knowledge and medical benefits, advanced computational methods for data analysis, integration and visualization are needed. We introduce a novel data integration framework, Anduril, for translating fragmented large-scale data into testable predictions. The Anduril framework allows rapid integration of heterogeneous data with state-of-the-art computational methods and existing knowledge in bio-databases. Anduril automatically generates thorough summary reports and a website that shows the most relevant features of each gene at a glance, allows sorting of data based on different parameters, and provides direct links to more detailed data on genes, transcripts or genomic regions. Anduril is open-source; all methods and documentation are freely available. We have integrated multidimensional molecular and clinical data from 338 subjects having glioblastoma multiforme, one of the deadliest and most poorly understood cancers, using Anduril. The central objective of our approach is to identify genetic loci and genes that have significant survival effect. Our results suggest several novel genetic alterations linked to glioblastoma multiforme progression and, more specifically, reveal Moesin as a novel glioblastoma multiforme-associated gene that has a strong survival effect and whose depletion in vitro significantly inhibited cell proliferation. All analysis results are available as a comprehensive website. Our results demonstrate that integrated analysis and visualization of multidimensional and heterogeneous data by Anduril enables drawing conclusions on functional consequences of large-scale molecular data. Many of the identified genetic loci and genes having significant survival effect have not been reported earlier in the context of glioblastoma multiforme. Thus, in addition to generally applicable novel methodology, our results provide several glioblastoma multiforme candidate genes for further studies. Anduril is available at http://csbi.ltdk.helsinki.fi/anduril/ and the glioblastoma multiforme analysis results are available at http://csbi.ltdk.helsinki.fi/anduril/tcga-gbm/

  8. Energy transfer, pressure tensor, and heating of kinetic plasma

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Matthaeus, William H.; Parashar, Tulasi N.; Haggerty, Colby C.; Roytershteyn, Vadim; Daughton, William; Wan, Minping; Shi, Yipeng; Chen, Shiyi

    2017-07-01

    Kinetic plasma turbulence cascade spans multiple scales, ranging from macroscopic fluid flow to sub-electron scales. The mechanisms that dissipate large-scale energy, terminate the inertial-range cascade, and convert kinetic energy into heat are hotly debated. Here, we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows the highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, -(P·∇)·u, can trigger a channel of energy conversion between fluid flow and random motions, which contains a collision-free generalization of the viscous dissipation in a collisional fluid. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
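    To make the pressure-work term concrete, the sketch below evaluates the pointwise density -(P·∇)·u on a 2-D grid with finite differences. The fields and the index convention for the contraction are assumptions for illustration only; the paper works with filtered fields from fully kinetic simulation.

    ```python
    import numpy as np

    def pressure_work(P, u, dx):
        """Pointwise pressure work -(P . grad) . u = -sum_ij P_ij dU_j/dx_i on
        a 2-D grid (finite differences; the index convention is assumed)."""
        pw = np.zeros_like(u[0])
        for i in range(2):
            for j in range(2):
                pw -= P[i][j] * np.gradient(u[j], dx, axis=i)
        return pw

    # synthetic smooth fields, purely for illustration
    n, L = 64, 2.0 * np.pi
    dx = L / n
    x = np.arange(n) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = [np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)]       # divergence-free flow
    P = [[1.0 + 0.1 * np.cos(X), 0.05 * np.sin(X + Y)],
         [0.05 * np.sin(X + Y), 1.0 + 0.1 * np.cos(Y)]]       # symmetric pressure tensor
    print("mean pressure work:", pressure_work(P, u, dx).mean())
    ```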

  9. Fundamental challenges to methane recovery from gas hydrates

    USGS Publications Warehouse

    Servio, P.; Eaton, M.W.; Mahajan, D.; Winters, W.J.

    2005-01-01

    The fundamental challenges that must be addressed to recover methane from dispersed hydrate sources, namely the location, magnitude, and feasibility of recovery, are presented. To induce dissociation of gas hydrate prior to methane recovery, two potential methods are typically considered. Because thermal stimulation requires a large energy input, it is less economically feasible than depressurization. The new data will allow the study of the effects of pressure, temperature, diffusion, porosity, tortuosity, composition of gas and water, and porous media on gas-hydrate production. These data will also allow improvement of existing models of the stability and dissociation of sea-floor hydrates. The reproducible kinetic data from the planned runs, together with sediment properties, will aid in developing a process to economically recover methane from a potentially untapped hydrate source. The availability of plentiful methane will allow economical and large-scale production of methane-derived clean fuels to help avert future energy crises.

  10. Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo

    2018-04-01

    We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.

  11. Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing

    PubMed Central

    Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad

    2015-01-01

    Biodiversity metrics are critical for assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternately, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407

  12. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. Here we demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos, with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue-clearing techniques, we reveal the entire cardiac structure and the hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implications in congenital heart diseases.

  13. Modelling disease outbreaks in realistic urban social networks

    NASA Astrophysics Data System (ADS)

    Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan

    2004-05-01

    Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
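    The key structural point, that sensors belong in the hubs of the locations graph, can be illustrated with a toy bipartite people-locations graph. The graph below is synthetic and uses placeholder sizes; the study derives its graphs from census, land-use and mobility data.

    ```python
    import random
    import networkx as nx

    random.seed(1)
    people = [f"p{i}" for i in range(200)]
    places = [f"loc{j}" for j in range(40)]

    # toy bipartite people-locations contact graph; real instances come from
    # large-scale individual-based urban traffic simulations
    G = nx.Graph()
    G.add_nodes_from(people, kind="person")
    G.add_nodes_from(places, kind="place")
    for p in people:
        for loc in random.sample(places, random.randint(2, 5)):
            G.add_edge(p, loc)

    # a location's degree counts its distinct visitors; sensors go in the hubs
    hubs = sorted(places, key=G.degree, reverse=True)[:5]
    print("sensor locations:", [(h, G.degree(h)) for h in hubs])
    ```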

  14. DEMNUni: massive neutrinos and the bispectrum of large scale structures

    NASA Astrophysics Data System (ADS)

    Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano

    2018-03-01

    The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance as it allows one to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, and find that the approximation of neutrinos contributing at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.

  15. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-01

    To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. For the purposes of this Guide, large-scale Federal renewable energy projects are defined as renewable energy facilities larger than 10 megawatts (MW) that are sited on Federal property and lands and typically financed and owned by third parties. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This Guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment, and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment required to meet the goals for large-scale renewable energy projects will come from the private sector, this Guide has been organized to match Federal processes with typical phases of commercial project development. FEMP collaborated with the National Renewable Energy Laboratory (NREL) and professional project developers on this Guide to ensure that Federal projects have key elements recognizable to private-sector developers and investors. The main purpose of this Guide is to provide a project development framework allowing the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project, and it begins the translation between the Federal and private-sector operating environments.

  16. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    PubMed

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
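    The cost crossover described above follows from a simple fixed-versus-marginal cost comparison: tablets carry an upfront device cost but nearly eliminate the per-survey data entry cost. All unit costs below are hypothetical placeholders, not the study's figures; only the structure of the argument matters.

    ```python
    def total_cost(n_surveys, upfront, per_survey):
        """Fixed cost plus marginal per-survey cost."""
        return upfront + per_survey * n_surveys

    # hypothetical unit costs (USD), not the study's figures
    paper = lambda n: total_cost(n, upfront=0.0, per_survey=2.50)      # data entry dominates
    tablet = lambda n: total_cost(n, upfront=5000.0, per_survey=0.25)  # devices dominate

    for n in (500, 2_000, 10_000, 50_000):
        winner = "tablet" if tablet(n) < paper(n) else "paper"
        print(f"{n:>6} surveys: paper ${paper(n):>9,.0f}  "
              f"tablet ${tablet(n):>9,.0f}  -> {winner}")
    ```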

  17. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  18. PSI Wide Area Product (WAP) for measuring Ground Surface Displacements at regional level for multi-hazards studies

    NASA Astrophysics Data System (ADS)

    Duro, Javier; Iglesias, Rubén; Blanco, Pablo; Albiol, David; Koudogbo, Fifamè

    2015-04-01

    The Wide Area Product (WAP) is a new interferometric product developed to provide measurements over large regions. Persistent Scatterer Interferometry (PSI) has largely proved its robust and precise performance in measuring ground surface deformation in different application domains. In this context, however, accurate displacement estimation over large-scale areas (more than 10,000 km2) characterized by low-magnitude motion gradients (3-5 mm/year), such as those induced by inter-seismic or Earth-tidal effects, still remains an open issue. The main reason for this is the inclusion of low-quality and more distant persistent scatterers in order to bridge low-quality areas, such as water bodies, crop areas and forested regions. This leads to spatial propagation of errors in the PSI integration process, poor estimation and compensation of the Atmospheric Phase Screen (APS), and difficulty in handling the residual long-wavelength phase patterns originating from orbit state vector inaccuracies. Research work on generating a Wide Area Product of ground motion in preparation for the Sentinel-1 mission has been conducted in the last stages of Terrafirma as well as in other research programs. These developments propose technological updates for maintaining precision in large-scale PSI analysis. Some of the updates are based on the use of external information, such as meteorological models, and on the employment of GNSS data for an improved calibration of large-scale measurements. Usually, covering wide regions implies processing over areas whose land use is chiefly devoted to livestock, horticulture, urbanization and forest. This represents an important challenge for providing continuous InSAR measurements and requires the application of advanced phase filtering strategies to enhance the coherence. The advanced PSI processing has been carried out over several areas, allowing a large-scale analysis of tectonic patterns and of motion caused by multiple hazards such as volcanic activity, landslides and floods. Several examples of the application of the PSI WAP to wide regions for measuring ground displacements related to different types of hazards, natural and human-induced, will be presented. The InSAR processing approach for measuring accurate movements at local and large scales to enable multi-hazard interpretation studies will also be discussed. The test areas will show deformations related to active fault systems, landslides on mountain slopes, ground compaction over underlying aquifers, and movements in volcanic areas.

  19. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and we offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  20. Renormalization-group flow of the effective action of cosmological large-scale structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input values at the UV scale. This allows for a self-contained computation of matter and velocity power spectra for which the sensitivity to UV modes is under control.

  1. Miniaturized integration of a fluorescence microscope

    PubMed Central

    Ghosh, Kunal K.; Burns, Laurie D.; Cocker, Eric D.; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J.

    2013-01-01

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals towards relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including semiconductor light source and sensor. This device enables high-speed cellular-level imaging across ∼0.5 mm2 areas in active mice. This capability allowed concurrent tracking of Ca2+ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca2+ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens. PMID:21909102

  2. The origin of density fluctuations in the 'new inflationary universe'

    NASA Technical Reports Server (NTRS)

    Turner, M. S.

    1983-01-01

    Cosmological mysteries which are not explained by the Big Bang hypothesis but may be approached by a revamped inflationary universe model are discussed. Attention is focused on the isotropy, the large-scale homogeneity, small-scale inhomogeneity, the oldness/flatness of the universe, and the baryon asymmetry. The universe is assumed to start in the lowest energy state, be initially dominated by false vacuum energy, enter a de Sitter phase, and then cross a barrier which is followed by the formation of fluctuation regions that lead to structure. The scalar fields (perturbation regions) experience quantum fluctuations which produce spontaneous symmetry breaking on a large scale. The scalar field value would need to be much greater than the expansion rate during the de Sitter epoch. A supersymmetric (flat) potential which satisfies the requirement, yields fluctuations of the right magnitude, and allows inflation to occur is described.

  3. Miniaturized integration of a fluorescence microscope.

    PubMed

    Ghosh, Kunal K; Burns, Laurie D; Cocker, Eric D; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J

    2011-09-11

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals for relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including a semiconductor light source and sensor. This device enables high-speed cellular imaging across ∼0.5 mm2 areas in active mice. This capability allowed concurrent tracking of Ca2+ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca2+ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens.

  4. Cosmic microwave background bispectrum from primordial magnetic fields on large angular scales.

    PubMed

    Seshadri, T R; Subramanian, Kandaswamy

    2009-08-21

    Primordial magnetic fields lead to non-Gaussian signals in the cosmic microwave background (CMB) even at the lowest order, as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. In contrast, CMB non-Gaussianity due to inflationary scalar perturbations arises only as a higher-order effect. We propose a novel probe of stochastic primordial magnetic fields that exploits the characteristic CMB non-Gaussianity that they induce. We compute the CMB bispectrum (b_{l1 l2 l3}) induced by such fields on large angular scales. We find a typical value of l1(l1+1) l3(l3+1) b_{l1 l2 l3} ~ 10^-22, for magnetic fields of strength B_0 ~ 3 nG and with a nearly scale-invariant magnetic spectrum. Observational limits on the bispectrum allow us to set upper limits on B_0 of about 35 nG.
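    For reference, the angle-averaged bispectrum and the reduced bispectrum b_{l1 l2 l3} quoted above are conventionally defined from the spherical-harmonic coefficients a_{lm} as follows. This is a standard convention in the CMB literature; the paper's exact normalization may differ.

    ```latex
    % Angle-averaged bispectrum built from the harmonic coefficients a_{lm}
    % via the Wigner 3-j symbol:
    B_{\ell_1 \ell_2 \ell_3} =
      \sum_{m_1 m_2 m_3}
      \begin{pmatrix} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3 \end{pmatrix}
      \left\langle a_{\ell_1 m_1}\, a_{\ell_2 m_2}\, a_{\ell_3 m_3} \right\rangle ,
    % and the reduced bispectrum b_{\ell_1 \ell_2 \ell_3}:
    B_{\ell_1 \ell_2 \ell_3} =
      \sqrt{\frac{(2\ell_1+1)(2\ell_2+1)(2\ell_3+1)}{4\pi}}
      \begin{pmatrix} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0 \end{pmatrix}
      b_{\ell_1 \ell_2 \ell_3} .
    ```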

  5. Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.

    PubMed

    Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B

    2012-06-01

    This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.

  6. Developing Regional Tephrostratigraphic Frameworks: Applications and Challenges.

    NASA Astrophysics Data System (ADS)

    Fontijn, K.; Pyle, D. M.; Smith, V.; Mather, T. A.

    2017-12-01

    Detailed stratigraphic studies of pyroclastic deposits form arguably the best tool to estimate the frequency and magnitude of explosive eruptions at volcanoes where limited or no historical records exist. As such, tephrostratigraphy forms a first-order assessment of potential future eruptive behaviour at poorly known volcanoes. Alternations of soils and pyroclastic deposits at proximal to medial distances from the volcano, however, typically only allow reconstruction of eruptive behaviour within the Holocene. Moreover, they tend to preserve only relatively large explosive eruptions, of magnitude 3-4 and above, and therefore almost invariably give a biased view of the frequency-magnitude relationships at a particular volcano. Long lacustrine records in medial to distal regions offer significant potential for obtaining a more complete view of the explosive eruptive record, as they often preserve thin, fine-grained tephra deposits representing either small-scale explosive eruptions not preserved on land or distal ash deposits from large explosive eruptions. Furthermore, these sedimentary records often contain material that can be dated to establish a detailed age-depth model, which can be used to date the eruptions and estimate the tempo of activity. In settings where volcanoes and lakes closely co-exist, integrating terrestrial and lacustrine data therefore allows the development of regional-scale tephrostratigraphic frameworks. Such frameworks provide a view of temporal trends in volcanic activity and mid/long-term eruptive rates on a regional scale rather than at the level of an individual volcano, i.e. in interaction with regional tectonic stress regimes. They also highlight the spatial distribution of deposits from large explosive eruptions, allowing improved estimates of the magnitudes of individual eruptions as well as of the frequency of impact by volcanic ash in specific regions. Provided such tephra horizons are well characterized and dated, they can be used as age-marker horizons and help fine-tune age models for palaeoenvironmental studies. In this presentation we will highlight a few key examples of both local and regional-scale tephrostratigraphic frameworks in East Africa, Chile and South-East Asia, and discuss the multidisciplinary applications as well as the challenges posed by data acquisition.

  7. Extracting Compositional Variation from THEMIS Data for Features with Large Topography on Mars Via Atmospheric Equalization

    NASA Technical Reports Server (NTRS)

    Anderson, F. S.; Drake, J. S.; Hamilton, V. E.

    2005-01-01

    We have developed a means of equalizing the atmospheric signature in Mars Odyssey Thermal Emission Imaging System (THEMIS) infrared data over regions with large topography such as the Valles Marineris (VM). This equalization allows for the analysis of compositional variations in regions that previously have been difficult to study because of the large differences in atmospheric path length that result from large changes in surface elevation. Specifically, our motivation for this study is to examine deposits that are small at the scales observable by the Thermal Emission Spectrometer (TES) onboard Mars Global Surveyor, but which are more readily resolved with THEMIS.

  8. Systematic effects of foreground removal in 21-cm surveys of reionization

    NASA Astrophysics Data System (ADS)

    Petrovic, Nada; Oh, S. Peng

    2011-05-01

    21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
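    The large-scale suppression described above is easy to reproduce in a toy calculation (a sketch only, not the authors' pipeline): fitting and subtracting a smooth low-order polynomial along the line of sight, as a stand-in for spectrally smooth foreground removal, strips most of the power from the smallest wavenumbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256                                  # frequency channels across the band
x = np.linspace(-1.0, 1.0, n)

# Toy line-of-sight 21-cm signal with a red (large-scale-dominated) spectrum.
k = np.fft.rfftfreq(n)
amp = np.where(k > 0, k**-1.0, 0.0)
noise = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
signal = np.fft.irfft(amp * noise, n)

# Foreground-removal proxy: subtract a smooth low-order polynomial fit.
cleaned = signal - np.polyval(np.polyfit(x, signal, deg=3), x)

p0 = np.abs(np.fft.rfft(signal))**2      # 1D power before subtraction
p1 = np.abs(np.fft.rfft(cleaned))**2     # ... and after
print("fraction of power retained in modes k1..k7:",
      np.round(p1[1:8] / p0[1:8], 3))
# The smallest wavenumbers lose nearly all their power; in 3D this aliases
# into suppression of the power spectrum across a broad range of scales.
```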

  9. Watershed Dynamics, with focus on connectivity index and management of water related impacts on road infrastructure

    NASA Astrophysics Data System (ADS)

    Kalantari, Z.

    2015-12-01

    In Sweden, spatially explicit approaches have been applied in various disciplines, such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale, whereas only a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. This study was built on a conceptual framework that considers the SedInConnect model, topography, land use, soil data and other PCDs, and climate change in an integrated way, to pave the way for more integrated policy making. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding at a large scale. This framework can provide an effective tool to inform a broad range of watershed planning activities within a region. Regional planners, decision-makers and others can use this tool to identify the most vulnerable points in a watershed and along roads, and to plan interventions and actions that mitigate the impacts of high flows and other extreme weather events on road infrastructure. Applying the model over a large scale can give a realistic spatial characterization of sediment connectivity for the optimal management of debris flow to road structures. The ability of the model to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features of the landscape. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.

  10. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle.

    PubMed

    Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs spanning 7° of latitude. We recorded 92 taxa and 38 morpho-groups across the three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful for documenting changes in marine communities and identifying adequate scales at which to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful for detecting changes in marine benthic communities across multiple spatial and temporal scales.

  11. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle

    PubMed Central

    Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F.; Byrne, Maria; Malcolm, Hamish A.; Williams, Stefan B.; Steinberg, Peter D.

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate ‘no-take’ and ‘general-use’ (fishing allowed) zones within three MPAs spanning 7° of latitude. We recorded 92 taxa and 38 morpho-groups across the three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5–10 years). Yet short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful for documenting changes in marine communities and identifying adequate scales at which to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful for detecting changes in marine benthic communities across multiple spatial and temporal scales. PMID:29547656

  12. Development of a high-throughput SNP resource to advance genomic, genetic and breeding research in carrot (Daucus carota L.)

    USDA-ARS?s Scientific Manuscript database

    The rapid advancement in high-throughput SNP genotyping technologies along with next generation sequencing (NGS) platforms has decreased the cost, improved the quality of large-scale genome surveys, and allowed specialty crops with limited genomic resources such as carrot (Daucus carota) to access t...

  13. Educational Interventions for Children with ASD: A Systematic Literature Review 2008-2013

    ERIC Educational Resources Information Center

    Bond, Caroline; Symes, Wendy; Hebron, Judith; Humphrey, Neil; Morewood, Gareth; Woods, Kevin

    2016-01-01

    Systematic literature reviews can play a key role in underpinning evidence-based practice. To date, large-scale reviews of interventions for individuals with Autism Spectrum Disorder (ASD) have focused primarily on research quality. To assist practitioners, the current review adopted a broader framework which allowed for greater consideration of…

  14. Design Research on Personalized Problem Posing in Algebra

    ERIC Educational Resources Information Center

    Walkington, Candace

    2017-01-01

    Algebra is an area of pressing national concern around issues of equity and access in education. Recent theories and research suggest that personalization of instruction can allow students to activate their funds of knowledge and can elicit interest in the content to be learned. This paper examines the results of a large-scale teaching experiment…

  15. In Search of a Practice: Large-Scale Moderation in a Massive Online Community

    ERIC Educational Resources Information Center

    Pisa, Sheila Saden

    2013-01-01

    People are increasingly looking to online social communities as ways of communicating. However, even as participation in social networking is increasing, online communities often fail to coalesce. Noted success factors for online communities are linked to the community's purpose and culture. They are also related to structures that allow for…

  16. Virtual Environments Supporting Learning and Communication in Special Needs Education

    ERIC Educational Resources Information Center

    Cobb, Sue V. G.

    2007-01-01

    Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…

  17. The GENIUS grid portal and robot certificates to perform phylogenetic analysis on a large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies grid access for occasional users and represents a valuable step towards widening the community of users.

  18. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution is intended to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be of value to the computational neuroscience and neuromorphic engineering communities.
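    SNAVA's processing elements execute user-programmed neuron and synapse models; as a rough software analogue (a sketch only, unrelated to the platform's actual instruction set), the loop below advances a population of leaky integrate-and-fire neurons with random recurrent weights, the kind of per-neuron update such architectures parallelize across PEs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, steps, dt = 100, 500, 1.0   # dt in ms
tau, v_thresh, v_reset = 20.0, 1.0, 0.0

w = rng.normal(0.0, 0.05, size=(n_neurons, n_neurons))   # random synaptic weights
v = np.zeros(n_neurons)                                   # membrane potentials
spikes = np.zeros(n_neurons, dtype=bool)
total_spikes = 0

for t in range(steps):
    i_syn = w @ spikes                         # synaptic input from last step's spikes
    i_ext = rng.normal(0.05, 0.02, n_neurons)  # noisy external drive
    v += dt / tau * (-v) + i_syn + i_ext       # leaky integration (Euler step)
    spikes = v >= v_thresh                     # threshold crossing -> spike
    v[spikes] = v_reset                        # reset spiking neurons
    total_spikes += spikes.sum()

print(f"mean firing rate: {total_spikes / (n_neurons * steps * dt) * 1000:.1f} Hz")
```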

  19. The leguminous species Anthyllis vulneraria as a Zn-hyperaccumulator and eco-Zn catalyst resources.

    PubMed

    Grison, Claire M; Mazel, Marine; Sellini, Amandine; Escande, Vincent; Biton, Jacques; Grison, Claude

    2015-04-01

    Anthyllis vulneraria was highlighted here as a Zn-hyperaccumulator for the development of a pilot phytoextraction process in the mine site of Les Avinières in the district of Saint-Laurent-Le-Minier. A. vulneraria appeared to hyperaccumulate the highest concentration of Zn in shoots with a better metal selectivity relative to Cd and Pb than the reference Zn-hyperaccumulator Noccea caerulescens. A bigger biomass production associated to a higher Zn concentration conducted A. vulneraria to the highest total zinc gain per hectare per year. As a legume, A. vulneraria was infected by rhizobia symbionts. Inoculation of A. vulneraria seeds showed a positive impact on Zn hyperaccumulation. A large-scale culture process of symbiotic rhizobia of A. vulneraria was investigated and optimized to allow large-scale inoculation process. Contaminated shoots of A. vulneraria were not considered as wastes and were recovered as Eco-Zn catalyst in particular, examples of organic synthesis, electrophilic aromatic substitution. Eco-Zn catalyst was much more efficient than conventional catalysts and allowed greener chemical processes.

  20. Viral Organization of Human Proteins

    PubMed Central

    Wuchty, Stefan; Siwo, Geoffrey; Ferdig, Michael T.

    2010-01-01

    Although maps of intracellular interactions are increasingly well characterized, little is known about large-scale maps of host-pathogen protein interactions. The investigation of host-pathogen interactions can reveal features of pathogenesis and provide a foundation for the development of drugs and disease prevention strategies. A compilation of experimentally verified interactions between HIV-1 and human proteins and a set of HIV-dependency factors (HDF) allowed insights into the topology and intricate interplay between viral and host proteins on a large scale. We found that targeted and HDF proteins appear predominantly in rich-clubs, groups of human proteins that are strongly intertwined among each other. These assemblies of proteins may serve as an infection gateway, allowing the virus to take control of the human host by reaching protein pathways and diversified cellular functions in a pronounced and focused way. Particular transcription factors and protein kinases facilitate indirect interactions between HDFs and viral proteins. Discerning the entanglement of directly targeted and indirectly interacting proteins may uncover molecular and functional sites that can provide novel perspectives on the progression of HIV infection and highlight new avenues to fight this virus. PMID:20827298
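    The rich-club structure referred to above has a standard quantitative form: the rich-club coefficient phi(k), the density of links among nodes of degree greater than k. A minimal sketch on a synthetic graph standing in for the human interactome (networkx provides the coefficient directly):

```python
import networkx as nx

# Toy scale-free network standing in for a human protein-interaction map.
G = nx.barabasi_albert_graph(n=500, m=3, seed=42)

# Rich-club coefficient phi(k): density of links among nodes of degree > k.
# (normalized=False; the normalized variant divides by values from
# degree-preserving randomized networks.)
rc = nx.rich_club_coefficient(G, normalized=False)

for k in sorted(rc)[::10]:
    print(f"degree > {k:3d}: phi = {rc[k]:.3f}")
# A rising phi(k) indicates that hubs are densely interconnected -- the
# "rich-club" structure in which targeted and HDF proteins were enriched.
```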

  1. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but the image resolution generally does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures, but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method that extracts information from aerial images and adapts the texture synthesis to each building. We describe a workflow that allows the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours; then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.
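    As an illustration of the pattern-selection step (the catalogue entries and colour values below are invented for the example, not taken from the paper), one simple rule is to pick the catalogue texture whose mean colour lies closest to the façade's principal colour:

```python
import numpy as np

# Hypothetical texture catalogue: pattern name -> mean RGB of the pattern.
catalogue = {
    "brick_red":     np.array([150,  70,  60]),
    "plaster_cream": np.array([225, 215, 190]),
    "stone_grey":    np.array([140, 140, 135]),
}

def select_pattern(facade_rgb):
    """Pick the catalogue entry whose mean colour is closest to the facade's
    principal colour (simple Euclidean distance in RGB space)."""
    name, _ = min(catalogue.items(),
                  key=lambda kv: np.linalg.norm(kv[1] - facade_rgb))
    return name

print(select_pattern(np.array([210, 200, 180])))  # -> plaster_cream
```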

  2. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  3. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.; Halsey, William; Dehoff, Ryan

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  4. HFSB-seeding for large-scale tomographic PIV in wind tunnels

    NASA Astrophysics Data System (ADS)

    Caridi, Giuseppe Carlo Alp; Ragni, Daniele; Sciacchitano, Andrea; Scarano, Fulvio

    2016-12-01

    A new system for large-scale tomographic particle image velocimetry in low-speed wind tunnels is presented. The system relies upon the use of sub-millimetre helium-filled soap bubbles as flow tracers, which scatter light with intensity several orders of magnitude higher than micron-sized droplets. With respect to a single bubble generator, the system increases the rate of bubbles emission by means of transient accumulation and rapid release. The governing parameters of the system are identified and discussed, namely the bubbles production rate, the accumulation and release times, the size of the bubble injector and its location with respect to the wind tunnel contraction. The relations between the above parameters, the resulting spatial concentration of tracers and measurement of dynamic spatial range are obtained and discussed. Large-scale experiments are carried out in a large low-speed wind tunnel with 2.85 × 2.85 m² test section, where a vertical axis wind turbine of 1 m diameter is operated. Time-resolved tomographic PIV measurements are taken over a measurement volume of 40 × 20 × 15 cm³, allowing the quantitative analysis of the tip-vortex structure and dynamical evolution.
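    The link between bubble production rate and tracer concentration admits a simple back-of-envelope form (a sketch under a uniform-dilution assumption; all numbers below are hypothetical, not the paper's): bubbles released at rate R into a stream tube of cross-section A convecting at speed U dilute to a mean number density of roughly R / (U · A).

```python
# Back-of-envelope seeding-concentration estimate (all numbers hypothetical).
# Bubbles released at rate R into a stream tube of cross-section A moving at
# speed U are diluted to a mean number density C ~= R / (U * A).

R = 50_000.0        # bubbles per second released after accumulation
U = 5.0             # freestream velocity, m/s
A = 0.40 * 0.20     # seeded cross-section at the measurement plane, m^2

C = R / (U * A)                 # bubbles per m^3
volume = 0.40 * 0.20 * 0.15     # 40 x 20 x 15 cm^3 measurement volume, in m^3
print(f"~{C:.0f} bubbles/m^3  ->  ~{C * volume:.0f} tracers in the volume")
```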

  5. A New 4D Imaging Method for Three-Phase Analogue Experiments in Volcanology (and Other Three-Phase Systems)

    NASA Astrophysics Data System (ADS)

    Oppenheimer, J.; Patel, K. B.; Lev, E.; Hillman, E. M. C.

    2017-12-01

    Bubbles and crystals suspended in magmas interact with each other on a small scale, which affects large-scale volcanic processes. Studying these interactions on relevant scales of time and space is a long-standing challenge. Therefore, the fundamental explanations for the behavior of bubble- and crystal-rich magmas are still largely speculative. Recent application of X-ray tomography to experiments with synthetic magmas has already improved our understanding of small-scale 4D (3D + time) phenomena. However, this technique has low imaging rates of < 20 volumes per second (vps) and does not work well with analogues, making experiments costly and slow. We demonstrate a novel methodology for imaging bubble-particle interactions in analogue suspensions by utilizing Swept Confocally Aligned Planar Excitation (SCAPE) microscopy. This laser-fluorescence-based method has been used to image live biological processes at high speed and in 3D. It allows imaging rates of up to several hundred vps and image volumes of up to 1 × 1 × 0.5 mm³, with a trade-off between speed and spatial resolution. We ran two sets of experiments with silicone oil and soda-lime glass beads of < 50 µm diameter, contained within a vertical glass casing of 50 × 5 × 4 mm³. We used two different bubble generation methods. In the first set of experiments, small air bubbles (< 1 mm) were introduced through a hole at the bottom of the sample and allowed to rise through a suspension with low-viscosity oil. We successfully imaged bubble rise and particle movements around the bubble. In the second set, bubbles were generated by mixing acetone into the suspension and decreasing the surface pressure to cause a phase change to gaseous acetone. This bubble generation method compared favorably with previous gum rosin-acetone experiments: it provided similar degassing behaviors, along with more control over suspension viscosity and optimal optical properties for laser transmission. Large volumes of suspended bubbles, however, interfered with the laser path. In this set, we were able to track bubble nucleation sites and nucleation rates in 4D. This promising technique allows the study of small-scale interactions in two- and three-phase systems, at high imaging rates and at low cost.

  6. Longitudinal aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap in ground effect

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1972-01-01

    An investigation of the in-ground-effect longitudinal aerodynamic characteristics of a large-scale swept augmentor wing model is presented, using the 40 x 80 ft wind tunnel. The investigation was conducted at three ground heights: h/c = 2.01, 1.61, and 1.34. The induced effect of underwing nacelles was studied with two powered nacelle configurations. One configuration used four JT-15D turbofans while the other used two J-85 turbojet engines. Two conical nozzles on each J-85 were used to deflect the thrust at angles from 0 to 120 deg. Tests were also performed without nacelles to allow comparison with previous ground-effect data.

  7. Shaping carbon nanostructures by controlling the synthesis process

    NASA Astrophysics Data System (ADS)

    Merkulov, Vladimir I.; Guillorn, Michael A.; Lowndes, Douglas H.; Simpson, Michael L.; Voelkl, Edgar

    2001-08-01

    The ability to control the nanoscale shape of nanostructures in a large-scale synthesis process is an essential and elusive goal of nanotechnology research. Here, we report significant progress toward that goal. We have developed a technique that enables controlled synthesis of nanoscale carbon structures with conical and cylinder-on-cone shapes and provides the capability to dynamically change the nanostructure shape during the synthesis process. In addition, we present a phenomenological model that explains the formation of these nanostructures and provides insight into methods for precisely engineering their shape. Since the growth process we report is highly deterministic in allowing large-scale synthesis of precisely engineered nanoscale components at defined locations, our approach provides an important tool for a practical nanotechnology.

  8. Los Alamos Discovers Super Efficient Solar Using Perovskite Crystals

    ScienceCinema

    Mohite, Aditya; Nie, Wanyi

    2018-05-11

    State-of-the-art photovoltaics using high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high temperature crystal-growth processes offer promising routes for developing low-cost, solar-based clean global energy solutions for the future. Solar cells composed of the recently discovered material organic-inorganic perovskites offer the efficiency of silicon, yet suffer from a variety of deficiencies limiting the commercial viability of perovskite photovoltaic technology. In research to appear in Science, Los Alamos National Laboratory researchers reveal a new solution-based hot-casting technique that eliminates these limitations, one that allows for the growth of high-quality, large-area, millimeter-scale perovskite crystals and demonstrates that highly efficient and reproducible solar cells with reduced trap assisted recombination can be realized.

  9. Predators, Prey and Habitat Structure: Can Key Conservation Areas and Early Signs of Population Collapse Be Detected in Neotropical Forests?

    PubMed

    de Thoisy, Benoit; Fayad, Ibrahim; Clément, Luc; Barrioz, Sébastien; Poirier, Eddy; Gond, Valéry

    2016-01-01

    Tropical forests with a low human population and an absence of large-scale deforestation provide unique opportunities to study successful conservation strategies, which should be based on adequate monitoring tools. This study explored the conservation status of a large predator, the jaguar, considered an indicator of how well ecological processes are maintained. We implemented an original integrative approach, exploring successive ecosystem status proxies, from habitats and the responses of predators and their prey to threats, to canopy structure and forest biomass. Niche modeling allowed identification of the more suitable habitats, which were significantly related to canopy height and forest biomass. Capture/recapture methods showed that jaguar density was higher in habitats identified as more suitable by the niche model. Surveys of ungulates, large rodents and birds also showed higher density where jaguars were more abundant. Although jaguar density does not allow early detection of overall vertebrate community collapse, a decrease in the abundance of large terrestrial birds was noted as good first evidence of disturbance. The most promising tool comes from easily acquired LiDAR data and radar images: a decrease in canopy roughness was closely associated with the disturbance of forests and the associated decrease in vertebrate biomass. This mixed approach, focusing on an apex predator, ecological modeling and remote-sensing information, not only helps detect early population declines in large mammals, but is also useful for discussing the relevance of large predators as indicators and the efficiency of conservation measures. It can also be easily extrapolated and adapted in a timely manner, since important open-source data are increasingly available and relevant for large-scale and real-time monitoring of biodiversity.

  10. Predators, Prey and Habitat Structure: Can Key Conservation Areas and Early Signs of Population Collapse Be Detected in Neotropical Forests?

    PubMed Central

    de Thoisy, Benoit; Fayad, Ibrahim; Clément, Luc; Barrioz, Sébastien; Poirier, Eddy; Gond, Valéry

    2016-01-01

    Tropical forests with a low human population and an absence of large-scale deforestation provide unique opportunities to study successful conservation strategies, which should be based on adequate monitoring tools. This study explored the conservation status of a large predator, the jaguar, considered an indicator of how well ecological processes are maintained. We implemented an original integrative approach, exploring successive ecosystem status proxies, from habitats and the responses of predators and their prey to threats, to canopy structure and forest biomass. Niche modeling allowed identification of the more suitable habitats, which were significantly related to canopy height and forest biomass. Capture/recapture methods showed that jaguar density was higher in habitats identified as more suitable by the niche model. Surveys of ungulates, large rodents and birds also showed higher density where jaguars were more abundant. Although jaguar density does not allow early detection of overall vertebrate community collapse, a decrease in the abundance of large terrestrial birds was noted as good first evidence of disturbance. The most promising tool comes from easily acquired LiDAR data and radar images: a decrease in canopy roughness was closely associated with the disturbance of forests and the associated decrease in vertebrate biomass. This mixed approach, focusing on an apex predator, ecological modeling and remote-sensing information, not only helps detect early population declines in large mammals, but is also useful for discussing the relevance of large predators as indicators and the efficiency of conservation measures. It can also be easily extrapolated and adapted in a timely manner, since important open-source data are increasingly available and relevant for large-scale and real-time monitoring of biodiversity. PMID:27828993

  11. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large-format, high-resolution imagers comes online (ODI, DECAM, LSST, etc.), we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as to overlay data from publicly available source catalogs (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
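    A minimal sketch of the dynamically scaling level-of-detail idea (the function and thresholds below are hypothetical, not SOURCE EXPLORER's API): at coarse zoom levels the viewer returns binned source counts for rendering as a density map, and only switches to individual sources once the view is fine enough.

```python
import numpy as np

rng = np.random.default_rng(7)
ra, dec = rng.uniform(0, 1, (2, 100_000))   # 100k hypothetical source positions

def lod_sources(ra, dec, zoom, max_points=2000):
    """Return raw sources at fine zoom, or binned counts at coarse zoom."""
    if len(ra) <= max_points or zoom >= 8:       # fine zoom: individual sources
        return ("points", ra, dec)
    nbins = 2 ** zoom                             # coarse zoom: aggregate
    counts, xe, ye = np.histogram2d(ra, dec, bins=nbins, range=[[0, 1], [0, 1]])
    return ("bins", counts, (xe, ye))

kind, *data = lod_sources(ra, dec, zoom=4)
print(kind, data[0].shape)   # -> bins (16, 16)
```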

  12. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the number of reactions and species, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
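    For reference, the unoptimized direct-method Gillespie Algorithm that LOLCAT Method accelerates looks as follows on a toy two-reaction system (a sketch, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: A -> B with rate k1, B -> A with rate k2.  State x = [A, B].
x = np.array([100, 0])
stoich = np.array([[-1, +1],     # reaction 1: A -> B
                   [+1, -1]])    # reaction 2: B -> A
k = np.array([0.1, 0.05])

t, t_end = 0.0, 50.0
while t < t_end:
    a = k * x                    # propensities: a1 = k1*A, a2 = k2*B
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)          # time to the next reaction
    j = rng.choice(len(a), p=a / a0)        # which reaction fires
    x += stoich[j]

print(f"t = {t:.1f}, state = {x}")
# LOLCAT's gains come from updating the propensities incrementally via the
# reaction-species bipartite graph instead of recomputing them all each step.
```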

  13. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the number of reactions and species, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  14. Dynamics of partially folded and unfolded proteins investigated with quasielastic neutron spectroscopy

    NASA Astrophysics Data System (ADS)

    Stadler, Andreas M.

    2018-05-01

    Molecular dynamics in proteins animate and play a vital role for biologically relevant processes of these biomacromolecules. Quasielastic incoherent neutron scattering (QENS) is a well-suited experimental method to study protein dynamics from the picosecond to several nanoseconds and in the Ångström length-scale. In QENS experiments of protein solutions hydrogens act as reporters for the motions of methyl groups or amino acids to which they are bound. Neutron Spin-Echo spectroscopy (NSE) offers the highest energy resolution in the field of neutron spectroscopy and allows the study of slow collective motions in proteins up to several hundred nanoseconds and in the nanometer length-scale. In the following manuscript I will review recent studies that stress the relevance of molecular dynamics for protein folding and for conformational transitions of intrinsically disordered proteins (IDPs). During the folding collapse the protein is exploring its accessible conformational space via molecular motions. A large flexibility of partially folded and unfolded proteins, therefore, is mandatory for rapid protein folding. IDPs are a special case as they are largely unstructured under physiological conditions. A large flexibility is a characteristic property of IDPs as it allows, for example, the interaction with various binding partners or the rapid response to different conditions.

  15. Demonstration-Scale High-Cell-Density Fermentation of Pichia pastoris.

    PubMed

    Liu, Wan-Cang; Zhu, Ping

    2018-01-01

    Pichia pastoris has been one of the most successful heterologous overexpression systems for generating proteins for large-scale production through high-cell-density fermentation. However, optimizing the conditions of large-scale high-cell-density fermentation for biochemistry and industrialization is usually a laborious and time-consuming process, and it is often difficult to produce authentic proteins in large quantities, which is a major obstacle for the analysis of functional and structural features and for industrial application. For these reasons, we have developed a protocol for efficient demonstration-scale high-cell-density fermentation of P. pastoris, which employs a new methanol-feeding strategy (the biomass-stat strategy) and a strategy of increased air pressure instead of pure-oxygen supplementation. The protocol includes three typical stages of glycerol batch fermentation (initial culture phase), glycerol fed-batch fermentation (biomass accumulation phase), and methanol fed-batch fermentation (induction phase), and allows direct online monitoring of fermentation conditions, including broth pH, temperature, DO, anti-foam generation, and feeding of glycerol and methanol. Using this protocol, production of the recombinant β-xylosidase of Lentinula edodes origin in 1000-L scale fermentation can reach ~900 mg/L or 9.4 mg/g cells (dry cell weight, intracellular expression), with a specific production rate and average specific production of 0.1 mg/g/h and 0.081 mg/g/h, respectively. The methodology described in this protocol can be easily transferred to other systems and scaled up for a large number of proteins used for either scientific study or commercial purposes.
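    The quoted productivity figures can be related with a small consistency check (assuming the standard definition of average specific production as product per unit biomass divided by induction time; the induction duration itself is not stated above, so the values derived below are inferred, not reported):

```python
# Consistency check on the quoted productivity figures.  The relation assumed
# here -- average specific production = product per cell mass / induction
# time -- is standard, but the induction time is not given in the abstract.

titer = 900.0          # mg product per L of broth
per_cell = 9.4         # mg product per g dry cell weight (DCW)
avg_rate = 0.081       # mg per g DCW per hour (average specific production)

cell_density = titer / per_cell        # implied biomass, g DCW per L
induction_h = per_cell / avg_rate      # implied induction duration, h

print(f"implied cell density : {cell_density:.0f} g DCW/L")
print(f"implied induction    : {induction_h:.0f} h")
```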

  16. HOW THE DENSITY ENVIRONMENT CHANGES THE INFLUENCE OF THE DARK MATTER–BARYON STREAMING VELOCITY ON COSMOLOGICAL STRUCTURE FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Kyungjin, E-mail: kjahn@chosun.ac.kr

    We study the dynamical effect of the relative velocity between dark matter and baryonic fluids, which remained supersonic after the epoch of recombination. The impact of this supersonic motion on the formation of cosmological structures was first formulated by Tseliakhovich and Hirata, in terms of the linear theory of small-scale fluctuations coupled to large-scale, relative velocities in mean-density regions. In their formalism, they limited the large-scale density environment to be that of the global mean density. We improve on their formulation by allowing variation in the density environment as well as the relative velocities. This leads to a new type of coupling between large-scale and small-scale modes. We find that the small-scale fluctuation grows in a biased way: faster in the overdense environment and slower in the underdense environment. We also find that the net effect on the global power spectrum of the density fluctuation is to boost its overall amplitude from the prediction by Tseliakhovich and Hirata. Correspondingly, the conditional mass function of cosmological halos and the halo bias parameter are both affected in a similar way. The discrepancy between our prediction and that of Tseliakhovich and Hirata is significant, and therefore, the related cosmology and high-redshift astrophysics should be revisited. The mathematical formalism of this study can be used for generating cosmological initial conditions of small-scale perturbations in generic, overdense (underdense) background patches.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad Allen

    EDENx is a multivariate data visualization tool that allows interactive, user-driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, called EDEN, to enable analysis of more dimensions and larger-scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks while EDEN is more appropriate for detailed data investigations.

  18. A global numerical study of radon-222 and lead-210 in the atmosphere using the AES and York University CDT General Circulation Model (AYCG)

    NASA Technical Reports Server (NTRS)

    Beagley, Stephen R.; Degrandpre, Jean; Mcconnell, John C.; Laprise, Rene; Mcfarlane, Norman

    1994-01-01

    The Canadian Climate Center (CCC) GCM has been modified to allow its use for studies in atmospheric chemistry. The initial experiments reported here were run to test the new transport module and to allow sensitivity studies of it. The impact of different types of parameterization of convective mixing has been studied based on the large-scale evolution of Rn-222 and Pb-210. Preliminary results have shown that the use of a scheme which mixes unstable columns over a very short time scale produces a global distribution of lead that agrees in some aspects with observations. The local impact of different mixing schemes on a short-lived tracer like radon is very important.
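    A toy version of a scheme that mixes unstable columns over a very short time scale (a sketch for illustration, not the CCC GCM's actual parameterization): wherever potential temperature decreases with height, the unstable layer and its tracer load are homogenized in a single step.

```python
import numpy as np

def convective_adjust(theta, tracer):
    """Where potential temperature decreases with height (unstable), mix the
    tracer (and theta) uniformly over the unstable layer in one step.
    Toy stand-in for a short-timescale convective mixing parameterization."""
    theta, tracer = theta.copy(), tracer.copy()
    i = 0
    while i < len(theta) - 1:
        if theta[i + 1] < theta[i]:            # unstable interface found
            j = i + 1
            while j < len(theta) and theta[j] < theta[i]:
                j += 1
            theta[i:j] = theta[i:j].mean()     # homogenize the layer
            tracer[i:j] = tracer[i:j].mean()
            i = 0                              # re-check from the bottom
        else:
            i += 1
    return theta, tracer

theta = np.array([300., 298., 299., 305.])     # K, bottom to top (unstable below)
rn222 = np.array([8., 2., 1., 0.1])            # surface-emitted tracer
print(convective_adjust(theta, rn222))         # lower three levels get mixed
```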

  19. BigView Image Viewing on Tiled Displays

    NASA Technical Reports Server (NTRS)

    Sandstrom, Timothy

    2007-01-01

    BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore, on relatively modest machines, images such as the Mars Orbiter Camera mosaic [92,160 × 33,280 pixels]. The images must first be converted into paged format, where the image is stored in 256 × 256-pixel pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 × 256 page.
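    The paged-pyramid arithmetic is straightforward to sketch (a hypothetical helper, not BigView's code): halve the image until it fits in one page, recording the page grid at each level.

```python
import math

def pyramid_levels(width, height, page=256):
    """Halve the image until it fits in a single page, reporting the page
    grid at each level (the paged-pyramid layout described above)."""
    levels = []
    w, h = width, height
    while True:
        pages = (math.ceil(w / page), math.ceil(h / page))
        levels.append(((w, h), pages))
        if w <= page and h <= page:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

# Mars Orbiter Camera mosaic dimensions quoted above.
for (w, h), (px, py) in pyramid_levels(92_160, 33_280):
    print(f"{w:6d} x {h:6d} px  ->  {px:4d} x {py:4d} pages")
```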

  20. PLANNING AND RESPONSE IN THE AFTERMATH OF A LARGE CRISIS: AN AGENT-BASED INFORMATICS FRAMEWORK*

    PubMed Central

    Barrett, Christopher; Bisset, Keith; Chandan, Shridhar; Chen, Jiangzhuo; Chungbaek, Youngyun; Eubank, Stephen; Evrenosoğlu, Yaman; Lewis, Bryan; Lum, Kristian; Marathe, Achla; Marathe, Madhav; Mortveit, Henning; Parikh, Nidhi; Phadke, Arun; Reed, Jeffrey; Rivers, Caitlin; Saha, Sudip; Stretz, Paula; Swarup, Samarth; Thorp, James; Vullikanti, Anil; Xie, Dawen

    2014-01-01

    We present a synthetic information and modeling environment that can allow policy makers to study various counter-factual experiments in the event of a large human-initiated crisis. The specific scenario we consider is a ground detonation caused by an improvised nuclear device in a large urban region. In contrast to earlier work in this area that focuses largely on the prompt effects on human health and injury, we focus on co-evolution of individual and collective behavior and its interaction with the differentially damaged infrastructure. This allows us to study short term secondary and tertiary effects. The present environment is suitable for studying the dynamical outcomes over a two week period after the initial blast. A novel computing and data processing architecture is described; the architecture allows us to represent multiple co-evolving infrastructures and social networks at a highly resolved temporal, spatial, and individual scale. The representation allows us to study the emergent behavior of individuals as well as specific strategies to reduce casualties and injuries that exploit the spatial and temporal nature of the secondary and tertiary effects. A number of important conclusions are obtained using the modeling environment. For example, the studies decisively show that deploying ad hoc communication networks to reach individuals in the affected area is likely to have a significant impact on the overall casualties and injuries. PMID:25580055

  1. PLANNING AND RESPONSE IN THE AFTERMATH OF A LARGE CRISIS: AN AGENT-BASED INFORMATICS FRAMEWORK*

    PubMed

    Barrett, Christopher; Bisset, Keith; Chandan, Shridhar; Chen, Jiangzhuo; Chungbaek, Youngyun; Eubank, Stephen; Evrenosoğlu, Yaman; Lewis, Bryan; Lum, Kristian; Marathe, Achla; Marathe, Madhav; Mortveit, Henning; Parikh, Nidhi; Phadke, Arun; Reed, Jeffrey; Rivers, Caitlin; Saha, Sudip; Stretz, Paula; Swarup, Samarth; Thorp, James; Vullikanti, Anil; Xie, Dawen

    2013-01-01

    We present a synthetic information and modeling environment that can allow policy makers to study various counter-factual experiments in the event of a large human-initiated crisis. The specific scenario we consider is a ground detonation caused by an improvised nuclear device in a large urban region. In contrast to earlier work in this area that focuses largely on the prompt effects on human health and injury, we focus on co-evolution of individual and collective behavior and its interaction with the differentially damaged infrastructure. This allows us to study short term secondary and tertiary effects. The present environment is suitable for studying the dynamical outcomes over a two week period after the initial blast. A novel computing and data processing architecture is described; the architecture allows us to represent multiple co-evolving infrastructures and social networks at a highly resolved temporal, spatial, and individual scale. The representation allows us to study the emergent behavior of individuals as well as specific strategies to reduce casualties and injuries that exploit the spatial and temporal nature of the secondary and tertiary effects. A number of important conclusions are obtained using the modeling environment. For example, the studies decisively show that deploying ad hoc communication networks to reach individuals in the affected area is likely to have a significant impact on the overall casualties and injuries.

  2. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    PubMed

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high-sensitivity, genus-specific Schistosoma CAA strip test, which allows detection of single-worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample-pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large-scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid, user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single-worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on the expected prevalence or, when this is unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, as the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure of worm burden. Pooling strategies allowing this type of large-scale testing are feasible with the various CAA strip test formats and do not affect sensitivity or specificity. They allow cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data on the performance of MDA programs and for strategic planning when moving towards transmission interruption and elimination.
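    The savings from pooling follow from simple probability. Below is a sketch of the two-stage (Dorfman-style) arithmetic, assuming the assay retains full sensitivity and specificity on pools, as argued above for the UCP-LF CAA test: every pool is tested once, and only members of positive pools are retested individually.

```python
# Expected tests per person under two-stage pooling (Dorfman scheme),
# assuming the assay keeps full sensitivity/specificity on pooled samples.

def tests_per_person(prevalence, pool_size):
    p_pool_pos = 1 - (1 - prevalence) ** pool_size   # pool contains >= 1 positive
    return 1 / pool_size + p_pool_pos                # pool test share + retests

for prev in (0.20, 0.05, 0.01):
    best = min(range(2, 101), key=lambda n: tests_per_person(prev, n))
    e = tests_per_person(prev, best)
    print(f"prevalence {prev:4.0%}: optimal pool size {best:3d} "
          f"-> {e:.2f} tests/person ({1 - e:.0%} saved)")
# As prevalence falls, the optimal pool size grows and the savings increase --
# the behavior described above for low-endemicity settings.
```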

  3. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    PubMed

    Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D

    2015-01-01

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are the main habitat-formers on temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution of these kelp forests along the continent, a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m² of seabed, of which ca 13,000 m² were used to quantify kelp cover at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° of latitude apart along the east and west coasts of Australia. We investigated the large-scale geographic variation in the distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. The maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for the successful implementation and management of conservation reserves.

  4. Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.

    PubMed

    Tran, Ngoc Tam L; Huang, Chun-Hsi

    2017-05-01

    We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool version 2.0, which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable, taking advantage of expandable AWS computing resources.

  5. Radiative Natural Supersymmetry with Mixed Axion/Higgsino Cold Dark Matter

    NASA Astrophysics Data System (ADS)

    Baer, Howard

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light higgsinos ≲ 200 GeV and light top squarks ≲ 500 GeV along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak finetuning and are safe from LHC searches. However, in the context of the MSSM, they predict too low a value of mh, and the relic density of thermally produced higgsino-like WIMPs falls well below dark matter (DM) measurements. Allowing for high scale soft SUSY breaking Higgs mass mHu > m0 leads to natural cancellations during RG running, and to radiatively induced low finetuning at the electroweak scale. This model of radiative natural SUSY (RNS), with large mixing in the top squark sector, allows for finetuning at the 5-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. If the strong CP problem is solved via the PQ mechanism, then we expect an axion-higgsino admixture of dark matter, where either or both the DM particles might be directly detected.

  6. Radiative natural supersymmetry with mixed axion/higgsino cold dark matter

    NASA Astrophysics Data System (ADS)

    Baer, Howard

    2013-05-01

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light higgsinos <~ 200 GeV and light top squarks <~ 500 GeV along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak finetuning and are safe from LHC searches. However, in the context of the MSSM, they predict too low a value of mh and the relic density of thermally produced higgsino-like WIMPs falls well below dark matter (DM) measurements. Allowing for high scale soft SUSY breaking Higgs mass mHu > m0 leads to natural cancellations during RG running, and to radiatively induced low finetuning at the electroweak scale. This model of radiative natural SUSY (RNS), with large mixing in the top squark sector, allows for finetuning at the 5-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. If the strong CP problem is solved via the PQ mechanism, then we expect an axion-higgsino admixture of dark matter, where either or both the DM particles might be directly detected.

  7. Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities

    NASA Astrophysics Data System (ADS)

    Doster, F.; Celia, M. A.; Nordbotten, J. M.

    2012-12-01

    Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
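
    To illustrate the reconstruction step, here is a minimal sketch that recovers a fine-scale wetting-saturation profile from a hydrostatic capillary pressure distribution, assuming a Brooks-Corey retention curve; all parameter values are illustrative and the hysteretic extensions studied in the abstract are not included.

```python
import numpy as np

def reconstruct_saturation(z, pc_ref, drho=300.0, g=9.81,
                           p_entry=5.0e3, lam=2.0, s_wr=0.2):
    """Fine-scale wetting saturation from a hydrostatic capillary pressure
    profile (vertical equilibrium) with a Brooks-Corey retention curve.
    z: heights above the cell bottom [m]; pc_ref: capillary pressure at z=0."""
    pc = pc_ref + drho * g * z                 # hydrostatic phase-pressure difference
    se = np.where(pc > p_entry, (pc / p_entry) ** (-lam), 1.0)
    return s_wr + (1.0 - s_wr) * se            # effective -> actual saturation

z = np.linspace(0.0, 30.0, 61)                 # a 30 m thick grid cell
s_w = reconstruct_saturation(z, pc_ref=6.0e3)
s_coarse = s_w.mean()                          # coarse (cell-averaged) saturation
```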

  8. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    NASA Astrophysics Data System (ADS)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theoretical processes can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the sustainable soil and water management objective. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow the identification of research needs in the interdisciplinary domain of modelling and monitoring, and improve the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.

  9. The HI Content of Galaxies as a Function of Local Density and Large-Scale Environment

    NASA Astrophysics Data System (ADS)

    Thoreen, Henry; Cantwell, Kelly; Maloney, Erin; Cane, Thomas; Brough Morris, Theodore; Flory, Oscar; Raskin, Mark; Crone-Odekon, Mary; ALFALFA Team

    2017-01-01

    We examine the HI content of galaxies as a function of environment, based on a catalogue of 41,527 galaxies that are part of the 70% complete Arecibo Legacy Fast-ALFA (ALFALFA) survey. We use nearest-neighbor methods to characterize local environment, and a modified version of the algorithm developed for the Galaxy and Mass Assembly (GAMA) survey to classify large-scale environment as group, filament, tendril, or void. We compare the HI content in these environments using statistics that include both HI detections and the upper limits on detections from ALFALFA. The large size of the sample allows us to statistically compare the HI content in different environments for early-type galaxies as well as late-type galaxies. This work is supported by NSF grants AST-1211005 and AST-1637339, the Skidmore Faculty-Student Summer Research program, and the Schupf Scholars program.
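
    As an illustration of the nearest-neighbor approach to local density, the sketch below implements one common variant, the projected n-th nearest-neighbor surface density; the exact estimator and parameters used in the ALFALFA analysis may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def nth_neighbor_density(positions, n=5):
    """Projected local density from the distance to the n-th nearest
    neighbor: Sigma_n = n / (pi * d_n**2)."""
    tree = cKDTree(positions)
    d, _ = tree.query(positions, k=n + 1)      # k=n+1: first hit is the point itself
    return n / (np.pi * d[:, -1] ** 2)

rng = np.random.default_rng(0)
gals = rng.uniform(0.0, 100.0, size=(1000, 2))  # mock projected positions [Mpc]
sigma5 = nth_neighbor_density(gals, n=5)
```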

  10. Covalent Binding with Neutrons on the Femto-scale

    NASA Astrophysics Data System (ADS)

    von Oertzen, W.; Kanada-En'yo, Y.; Kimura, M.

    2017-06-01

    In light nuclei we have well-defined clusters, nuclei with closed shells, which serve as centers for binary molecules with covalent binding by valence neutrons. Single-neutron orbitals in light neutron-excess nuclei have well-defined shell model quantum numbers. With the combination of two clusters and their neutron valence states, molecular two-center orbitals are defined; in the two-center shell model we can place valence neutrons in a large variety of molecular two-center states, and the formation of dimers becomes possible. The corresponding rotational bands, with their large moments of inertia and the Coriolis decoupling effect (for K = 1/2 bands), point to the internal molecular orbital structure of these states. On this basis, the neutron-rich isotopes allow the formation of a large variety of molecular structures on the nuclear scale. An extended Ikeda diagram can be drawn for these cases. Molecular bands in Be and Ne isotopes are discussed as textbook examples.

  11. The graphene phonon dispersion with C12 and C13 isotopes

    NASA Astrophysics Data System (ADS)

    Whiteway, Eric; Bernard, Simon; Yu, Victor; Austing, D. Guy; Hilke, Michael

    2013-12-01

    Using very uniform large scale chemical vapor deposition grown graphene transferred onto silicon, we were able to identify 15 distinct Raman lines associated with graphene monolayers. This was possible thanks to a combination of different carbon isotopes and different Raman laser energies and extensive averaging without increasing the laser power. This allowed us to obtain a detailed experimental phonon dispersion relation for many points in the Brillouin zone. We further identified a D+D' peak corresponding to a double phonon process involving both an inter- and intra-valley phonon. In order to both eliminate substrate effects and to probe large areas, we undertook to study Raman scattering for large scale chemical vapor deposition (CVD) grown graphene using two different isotopes (C12 and C13) so that we can effectively exclude and subtract the substrate contributions, since a heavier mass downshifts only the vibrational properties, while keeping all other properties the same.

  12. Representation matters: quantitative behavioral variation in wild worm strains

    NASA Astrophysics Data System (ADS)

    Brown, Andre

    Natural genetic variation in populations is the basis of genome-wide association studies, an approach that has been applied in large studies of humans to study the genetic architecture of complex traits including disease risk. Of course, the traits you choose to measure determine which associated genes you discover (or miss). In large-scale human studies, the measured traits are usually taken as a given during the association step because they are expensive to collect and standardize. Working with the nematode worm C. elegans, we do not have the same constraints. In this talk I will describe how large-scale imaging of worm behavior allows us to develop alternative representations of behavior that vary differently across wild populations. The alternative representations yield novel traits that can be used for genome-wide association studies and may reveal basic properties of the genotype-phenotype map that are obscured if only a small set of fixed traits are used.

  13. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299

  14. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.
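
    The MLEM update itself is compact: the Spark/GraphX version distributes the sparse matrix-vector products, but a minimal serial NumPy sketch of the same multiplicative update (with a toy system matrix standing in for a real SPECT geometry) looks like this:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation-maximization reconstruction.
    A : (m, n) system matrix; y : (m,) measured counts."""
    x = np.ones(A.shape[1])                  # nonnegative initial image
    sens = A.sum(axis=0) + eps               # sensitivity: back-projection of ones
    for _ in range(n_iter):
        proj = A @ x + eps                   # forward projection
        x *= (A.T @ (y / proj)) / sens       # multiplicative MLEM update
    return x

# Toy problem standing in for a real scanner geometry.
rng = np.random.default_rng(0)
A = rng.random((64, 16))
x_true = rng.random(16)
y = rng.poisson(100 * (A @ x_true)).astype(float)
x_hat = mlem(A, y)
```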

  15. Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation

    NASA Astrophysics Data System (ADS)

    Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru

    2013-11-01

    The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals allowing both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculations and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation in 131In, 131Sn, 130In, 130Cd, and 130Sn, are well described and clearly explained by couplings of single-hole orbitals and by core excitations.

  16. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  17. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in close to native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open-source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service, permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  18. Habitat structure mediates biodiversity effects on ecosystem properties

    PubMed Central

    Godbold, J. A.; Bulling, M. T.; Solan, M.

    2011-01-01

    Much of what we know about the role of biodiversity in mediating ecosystem processes and function stems from manipulative experiments, which have largely been performed in isolated, homogeneous environments that do not incorporate habitat structure or allow natural community dynamics to develop. Here, we use a range of habitat configurations in a model marine benthic system to investigate the effects of species composition, resource heterogeneity and patch connectivity on ecosystem properties at both the patch (bioturbation intensity) and multi-patch (nutrient concentration) scale. We show that allowing fauna to move and preferentially select patches alters local species composition and density distributions, which has negative effects on ecosystem processes (bioturbation intensity) at the patch scale, but overall positive effects on ecosystem functioning (nutrient concentration) at the multi-patch scale. Our findings provide important evidence that community dynamics alter in response to localized resource heterogeneity and that these small-scale variations in habitat structure influence species contributions to ecosystem properties at larger scales. We conclude that habitat complexity forms an important buffer against disturbance and that contemporary estimates of the level of biodiversity required for maintaining future multi-functional systems may need to be revised. PMID:21227969

  19. Habitat structure mediates biodiversity effects on ecosystem properties.

    PubMed

    Godbold, J A; Bulling, M T; Solan, M

    2011-08-22

    Much of what we know about the role of biodiversity in mediating ecosystem processes and function stems from manipulative experiments, which have largely been performed in isolated, homogeneous environments that do not incorporate habitat structure or allow natural community dynamics to develop. Here, we use a range of habitat configurations in a model marine benthic system to investigate the effects of species composition, resource heterogeneity and patch connectivity on ecosystem properties at both the patch (bioturbation intensity) and multi-patch (nutrient concentration) scale. We show that allowing fauna to move and preferentially select patches alters local species composition and density distributions, which has negative effects on ecosystem processes (bioturbation intensity) at the patch scale, but overall positive effects on ecosystem functioning (nutrient concentration) at the multi-patch scale. Our findings provide important evidence that community dynamics alter in response to localized resource heterogeneity and that these small-scale variations in habitat structure influence species contributions to ecosystem properties at larger scales. We conclude that habitat complexity forms an important buffer against disturbance and that contemporary estimates of the level of biodiversity required for maintaining future multi-functional systems may need to be revised.

  20. Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach

    DOE PAGES

    Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.

    2017-11-14

    A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates and map out the energy pathways from simulated global ocean data. Traditional tools to measure the energy cascade from turbulence theory, such as spectral flux or spectral transfer, rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows for probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.
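
    A one-dimensional toy version conveys the core of the coarse-graining operation: low-pass filter the velocity field at a scale ell, form the subfilter stress, and multiply by the filtered gradient to obtain the scale-to-scale energy flux. The actual framework operates on the full velocity tensor over the globe; this sketch only illustrates the idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def energy_flux(u, ell, dx=1.0):
    """Scale-to-scale kinetic energy flux for a 1D velocity field:
    Pi_ell = -tau_ell * d(u_bar)/dx, with tau_ell = bar(u u) - bar(u) bar(u)."""
    sigma = ell / dx                           # filter width in grid points
    u_bar = gaussian_filter1d(u, sigma, mode="wrap")
    uu_bar = gaussian_filter1d(u * u, sigma, mode="wrap")
    tau = uu_bar - u_bar * u_bar               # subfilter stress
    return -tau * np.gradient(u_bar, dx)       # > 0: energy sent to smaller scales

x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(4.0 * x) + 0.3 * np.sin(32.0 * x)   # toy velocity field
pi_ell = energy_flux(u, ell=8.0 * (x[1] - x[0]), dx=x[1] - x[0])
```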

  1. Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.

    A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates and map out the energy pathways from simulated global ocean data. Traditional tools to measure the energy cascade from turbulence theory, such as spectral flux or spectral transfer, rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows for probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.

  2. Evaluating large-scale health programmes at a district level in resource-limited countries.

    PubMed

    Svoronos, Theodore; Mate, Kedar S

    2011-11-01

    Recent experience in evaluating large-scale global health programmes has highlighted the need to consider contextual differences between sites implementing the same intervention. Traditional randomized controlled trials are ill-suited for this purpose, as they are designed to identify whether an intervention works, not how, when and why it works. In this paper we review several evaluation designs that attempt to account for contextual factors that contribute to intervention effectiveness. Using these designs as a base, we propose a set of principles that may help to capture information on context. Finally, we propose a tool, called a driver diagram, traditionally used in implementation, that would allow evaluators to systematically monitor changing dynamics in project implementation and identify contextual variation across sites. We describe an implementation-related example from South Africa to underline the strengths of the tool. If used across multiple sites and multiple projects, the resulting driver diagrams could be pooled together to form a generalized theory of how, when and why a widely-used intervention works. Mechanisms similar to the driver diagram are urgently needed to complement existing evaluations of large-scale implementation efforts.

  3. Quantitative Large-Scale Three-Dimensional Imaging of Human Kidney Biopsies: A Bridge to Precision Medicine in Kidney Disease.

    PubMed

    Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M

    2018-06-05

    Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.

  4. Ocean Research Enabled by Underwater Gliders.

    PubMed

    Rudnick, Daniel L

    2016-01-01

    Underwater gliders are autonomous underwater vehicles that profile vertically by changing their buoyancy and use wings to move horizontally. Gliders are useful for sustained observation at relatively fine horizontal scales, especially to connect the coastal and open ocean. In this review, research topics are grouped by time and length scales. Large-scale topics addressed include the eastern and western boundary currents and the regional effects of climate variability. The accessibility of horizontal length scales of order 1 km allows investigation of mesoscale and submesoscale features such as fronts and eddies. Because the submesoscales dominate vertical fluxes in the ocean, gliders have found application in studies of biogeochemical processes. At the finest scales, gliders have been used to measure internal waves and turbulent dissipation. The review summarizes gliders' achievements to date and assesses their future in ocean observation.

  5. The Psychiatric Genomics Consortium Posttraumatic Stress Disorder Workgroup: Posttraumatic Stress Disorder Enters the Age of Large-Scale Genomic Collaboration

    PubMed Central

    Logue, Mark W; Amstadter, Ananda B; Baker, Dewleen G; Duncan, Laramie; Koenen, Karestan C; Liberzon, Israel; Miller, Mark W; Morey, Rajendra A; Nievergelt, Caroline M; Ressler, Kerry J; Smith, Alicia K; Smoller, Jordan W; Stein, Murray B; Sumner, Jennifer A; Uddin, Monica

    2015-01-01

    The development of posttraumatic stress disorder (PTSD) is influenced by genetic factors. Although there have been some replicated candidates, the identification of risk variants for PTSD has lagged behind genetic research of other psychiatric disorders such as schizophrenia, autism, and bipolar disorder. Psychiatric genetics has moved beyond examination of specific candidate genes in favor of the genome-wide association study (GWAS) strategy of very large numbers of samples, which allows for the discovery of previously unsuspected genes and molecular pathways. The successes of genetic studies of schizophrenia and bipolar disorder have been aided by the formation of a large-scale GWAS consortium: the Psychiatric Genomics Consortium (PGC). In contrast, only a handful of GWAS of PTSD have appeared in the literature to date. Here we describe the formation of a group dedicated to large-scale study of PTSD genetics: the PGC-PTSD. The PGC-PTSD faces challenges related to the contingency on trauma exposure and the large degree of ancestral genetic diversity within and across participating studies. Using the PGC analysis pipeline supplemented by analyses tailored to address these challenges, we anticipate that our first large-scale GWAS of PTSD will comprise over 10 000 cases and 30 000 trauma-exposed controls. Following in the footsteps of our PGC forerunners, this collaboration—of a scope that is unprecedented in the field of traumatic stress—will lead the search for replicable genetic associations and new insights into the biological underpinnings of PTSD. PMID:25904361

  6. The scientific targets of the SCOPE mission

    NASA Astrophysics Data System (ADS)

    Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.

    The future Japanese magnetospheric mission "SCOPE" is now under study (planned to be launched in 2012). The main purpose of this mission is to investigate the dynamic behavior of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, be they large scale as a whole, are characterized by coupling over various time and spatial scales. The best example would be the magnetic reconnection process, which is a large-scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in liberating the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector that has 10 ms time resolution, so that scales down to the electron scale will be resolved. Three of the four daughter satellites surround the mother satellite three-dimensionally with mutual distances between several km and several thousand km, which are varied during the mission. Plasma measurements on these spacecraft will have 1 s resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. By correlation between the two plasma wave instruments on the daughter and mother spacecraft, the propagation of the waves and information on the electron-scale dynamics will be obtained. By this strategy, both meso- and micro-scale information on the dynamics is obtained, which will enable us to investigate the physics of space plasma from the cross-scale coupling point of view.

  7. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model’s Motions and Loads Measurement in Realistic Sea Waves

    PubMed Central

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-01-01

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed on these laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., a Global Positioning System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested in field experiments in coastal seas, which indicate that the proposed large-scale model testing scheme is capable and feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379

  8. Low-Cost and Scaled-Up Production of Fluorine-Free, Substrate-Independent, Large-Area Superhydrophobic Coatings Based on Hydroxyapatite Nanowire Bundles.

    PubMed

    Chen, Fei-Fei; Yang, Zi-Yue; Zhu, Ying-Jie; Xiong, Zhi-Chao; Dong, Li-Ying; Lu, Bing-Qiang; Wu, Jin; Yang, Ri-Long

    2018-01-09

    To date, the scaled-up production and large-area applications of superhydrophobic coatings are limited because of complicated procedures, environmentally harmful fluorinated compounds, restrictive substrates, expensive equipment, and raw materials usually involved in the fabrication process. Herein, the facile, low-cost, and green production of superhydrophobic coatings based on hydroxyapatite nanowire bundles (HNBs) is reported. Hydrophobic HNBs are synthesised by using a one-step solvothermal method with oleic acid as the structure-directing and hydrophobic agent. During the reaction process, highly hydrophobic C-H groups of oleic acid molecules can be attached in situ to the surface of HNBs through the chelate interaction between Ca 2+ ions and carboxylic groups. This facile synthetic method allows the scaled-up production of HNBs up to about 8 L, which is the largest production scale of superhydrophobic paint based on HNBs ever reported. In addition, the design of the 100 L reaction system is also shown. The HNBs can be coated on any substrate with an arbitrary shape by the spray-coating technique. The self-cleaning ability in air and oil, high-temperature stability, and excellent mechanical durability of the as-prepared superhydrophobic coatings are demonstrated. More importantly, the HNBs are coated on large-sized practical objects to form large-area superhydrophobic coatings. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. PILOT: optical performance and end-to-end characterisation

    NASA Astrophysics Data System (ADS)

    Longval, Y.; Misawa, R.; Ade, P.; André, Y.; de Bernardis, P.; Bousquet, F.; Bouzit, M.; Buttice, V.; Charra, M.; Crane, B.; Dubois, J. P.; Engel, C.; Griffin, M.; Hargrave, P.; Leriche, B.; Maestre, S.; Marty, C.; Marty, W.; Masi, S.; Mot, B.; Narbonne, J.; Pajot, F.; Pisano, G.; Ponthieu, N.; Ristorcelli, I.; Rodriguez, L.; Roudil, G.; Simonella, O.; Salatino, M.; Savini, G.; Tucker, C.; Bernard, J.-P.

    2017-11-01

    PILOT (Polarized Instrument for the Long-wavelength Observations of the Tenuous ISM) is a balloon-borne astronomy experiment dedicated to studying the polarization of dust emission from the diffuse ISM in our Galaxy [1]. The observations of PILOT have two major scientific objectives. Firstly, they will allow us to constrain the large-scale geometry of the magnetic field in our Galaxy and to study in detail the alignment properties of dust grains with respect to the magnetic field. In this domain, the measurements of PILOT will complement those of the Planck satellite at longer wavelengths. In particular, they will bring information at a better angular resolution, which is critical in crowded regions such as the Galactic plane. They will allow us to better understand how the magnetic field is shaping the ISM material on large scales in molecular clouds, and the role it plays in the gravitational collapse leading to star formation. Secondly, the PILOT observations will allow us to measure for the first time the polarized dust emission towards the most diffuse regions of the sky, where the measurements are the most easily interpreted in terms of the physics of dust. In this particular domain, PILOT will play a role for future CMB missions similar to that played by the Archeops experiment for Planck. The results of PILOT will allow us to gain knowledge about the magnetic properties of dust grains and about the structure of the magnetic field in the diffuse ISM, which is necessary for precise foreground subtraction in future polarized CMB measurements. The PILOT measurements, combined with those of Planck at longer wavelengths, will therefore allow us to further constrain the dust models. The outcome of such studies will likely impact the instrumental and technical choices for future space missions dedicated to CMB polarization. The PILOT instrument will allow observations in two photometric channels at wavelengths of 240 μm and 550 μm, with an angular resolution of a few arcminutes. We will make use of large-format bolometer arrays developed for the PACS instrument on board the Herschel satellite. With 1024 detectors per photometric channel and photometric bands optimized for the measurement of dust emission, PILOT is likely to become the most sensitive experiment for this type of measurement. The PILOT experiment will take advantage of the large gain in sensitivity allowed by the use of large-format, filled bolometer arrays at frequencies more favorable to the detection of dust emission. This paper presents the optical design, the optical characterization and its performance. We begin with a presentation of the instrument and the optical system, and then we summarise the main optical tests performed. In section III, we present preliminary end-to-end test results.

  10. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for their analysis and control design as the network size and node system/interaction complexity increase. Therefore, it is a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
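
    The paper solves its LMIs with the MATLAB toolbox; as a rough open-source analogue, the sketch below tests the feasibility of the basic Lyapunov LMI underlying such stabilisability conditions using CVXPY, with an arbitrary example node dynamic A.

```python
import cvxpy as cp
import numpy as np

# Feasibility of the Lyapunov LMI: find P = P^T > 0 with A^T P + P A < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])                   # example stable node dynamic
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),           # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)                          # "optimal" -> LMI feasible
```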

  11. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  12. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data have transformed the field of large-scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental-scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national-scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge-to-gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large-scale 2D hydraulic model to predict regional-scale flooding.
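
    The study builds its event sets with a conditional multivariate extremes model; the simpler Gaussian-copula stand-in sketched below illustrates the underlying idea of coupling per-gauge extreme-value margins through a gauge-to-gauge dependence structure. The correlation matrix and GEV parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

def simulate_footprints(corr, n_events, gev_params):
    """Spatially correlated flood events at a set of gauges via a Gaussian
    copula: correlated normals -> uniforms -> per-gauge GEV margins."""
    z = stats.multivariate_normal(cov=corr).rvs(size=n_events)
    u = stats.norm.cdf(z)                      # uniform margins, dependence kept
    return np.column_stack([
        stats.genextreme.ppf(u[:, j], c=c, loc=loc, scale=sc)
        for j, (c, loc, sc) in enumerate(gev_params)
    ])

corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])                  # invented gauge-to-gauge dependence
gev_params = [(-0.1, 100.0, 25.0),             # invented per-gauge GEV fits
              (-0.1, 80.0, 20.0)]
events = simulate_footprints(corr, 10000, gev_params)
```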

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolis, Nadia; Albrecht, Andreas; Holman, R.

    We consider the effects of entanglement in the initial quantum state of scalar and tensor fluctuations during inflation. We allow the gauge-invariant scalar and tensor fluctuations to be entangled in the initial state and compute modifications to the various cosmological power spectra. We compute the angular power spectra (C_l's) for some specific cases of our entangled state and discuss what signals one might expect to find in CMB data. This entanglement can also break rotational invariance, allowing for the possibility that some of the large-scale anomalies in the CMB power spectrum might be explained by this mechanism.

  14. Packaging of silicon photonic devices: from prototypes to production

    NASA Astrophysics Data System (ADS)

    Morrissey, Padraic E.; Gradkowski, Kamil; Carroll, Lee; O'Brien, Peter

    2018-02-01

    The challenges associated with the photonic packaging of silicon devices are often underestimated, and packaging remains technically challenging. In this paper, we review some key enabling technologies that will allow us to overcome the current bottleneck in silicon photonic packaging, while also describing recent developments in standardisation, including the establishment of PIXAPP as the world's first open-access PIC packaging and assembly Pilot Line. These developments will allow the community to move from low-volume prototype packaged photonic devices to large-scale volume manufacturing, where the full commercialisation of PIC technology can be realised.

  15. Instrumentation for studying binder burnout in an immobilized plutonium ceramic wasteform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, M; Pugh, D; Herman, C

    The Plutonium Immobilization Program produces a ceramic wasteform that utilizes organic binders. Several techniques and instruments were developed to study binder burnout on full-size ceramic samples in a production environment. This approach provides a method for developing process parameters at production scale to optimize throughput, product quality, offgas behavior, and plant emissions. These instruments allow for offgas analysis, large-scale TGA, product quality observation, and thermal modeling. Using these tools, results from lab-scale techniques such as laser dilatometry studies and traditional TGA/DTA analysis can be integrated. Often, the sintering step of a ceramification process is the limiting process step that controls the production throughput. Therefore, optimization of sintering behavior is important for overall process success. Furthermore, the capabilities of this instrumentation allow a better understanding of plant emissions of key gases: volatile organic compounds (VOCs), volatile inorganics including some halide compounds, NO_x, SO_x, carbon dioxide, and carbon monoxide.

  16. Quantification of non-ideal explosion violence with a shock tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Scott I; Hill, Larry G

    There is significant interest in quantifying the blast violence associated with various non-ideal explosions. Such data are essential to evaluate the damage potential of both explosive cookoff and terrorist explosive scenarios. We present a technique designed to measure the source energy associated with a non-ideal, asymmetrical, and three-dimensional explosion. A tube is used to confine and focus energy from a blast event into a one-dimensional, quasi-planar shock front. During propagation along the length of the tube, the wave is allowed to shock-steepen into a more ideal form. Pressure transducers then measure the shock overpressure as a function of the distance from the source. One-dimensional blast scaling theory allows calculation of the source energy from these data. This small-scale test method addresses cost and noise concerns as well as boosting and symmetry issues associated with large-scale, three-dimensional blast arena tests. Results from both ideal explosives and non-ideal explosives are discussed.
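
    A hedged sketch of the final inference step: in planar blast scaling, overpressure depends on distance only through x/L with a characteristic length L = E_A/p_amb, so fitting a decay law calibrated on a reference explosive to the transducer data recovers the source energy per unit tube area E_A. The decay constants and gauge readings below are illustrative, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

p_amb = 101325.0                               # ambient pressure [Pa]
a_ref, b_ref = 0.9, 1.1                        # hypothetical calibrated decay law

def overpressure(x, E_A):
    """Planar blast scaling: overpressure depends on x only through x / L,
    with characteristic length L = E_A / p_amb (E_A: energy per unit area)."""
    L = E_A / p_amb
    return p_amb * a_ref * (x / L) ** (-b_ref)

x_gauges = np.array([2.0, 4.0, 8.0, 16.0])        # transducer positions [m]
dp_meas = np.array([3.1e5, 1.5e5, 6.8e4, 3.2e4])  # illustrative overpressures [Pa]
popt, _ = curve_fit(overpressure, x_gauges, dp_meas, p0=[1e6])
print(f"inferred source energy per unit area: {popt[0]:.3g} J/m^2")
```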

  17. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data.

    PubMed

    Ikegami, Takashi; Mototake, Yoh-Ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-12-28

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
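
    For reference, the per-agent update in a classic boid model combines separation, alignment and cohesion; the weights and the O(N^2) neighbor search in this sketch are illustrative, and large-scale runs such as those discussed here would use spatial indexing instead.

```python
import numpy as np

def boid_step(pos, vel, dt=0.1, r=1.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One update of the classic boid rules: separation, alignment, cohesion."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nb = (dist > 0) & (dist < r)                      # neighbors within radius r
        if not nb.any():
            continue
        sep = -(d[nb] / dist[nb, None] ** 2).sum(axis=0)  # steer apart
        ali = vel[nb].mean(axis=0) - vel[i]               # match velocities
        coh = d[nb].mean(axis=0)                          # steer to local centroid
        acc[i] = w_sep * sep + w_ali * ali + w_coh * coh
    vel = vel + dt * acc
    return pos + dt * vel, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (200, 2))
vel = rng.standard_normal((200, 2))
for _ in range(100):
    pos, vel = boid_step(pos, vel)
```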

  18. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    NASA Astrophysics Data System (ADS)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of Lyman-α forest, in which Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict the velocity bias of -0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for the highly biased galaxies, but this depends on the model parameters and can also vanish or change the sign.

  20. Paradigm change in ocean studies: multi-platform observing and forecasting integrated approach in response to science and society needs

    NASA Astrophysics Data System (ADS)

    Tintoré, Joaquín

    2017-04-01

    The last 20 years of ocean research have allowed a description of the state of the large-scale ocean circulation. However, it is also well known that there is no such thing as a single ocean state and that the ocean varies over a wide range of spatial and temporal scales. More recently, in the last 10 years, new monitoring and modelling technologies have emerged, allowing quasi-real-time observation and forecasting of the ocean at regional and local scales. These new technologies are key components of the observing and forecasting systems being progressively implemented in many regional seas and coastal areas of the world oceans. As a result, new capabilities to characterise the ocean state and, more importantly, its variability at small spatial and temporal scales exist today, in many cases in quasi-real time. Examples of relevance for society include our capability to detect and understand long-term climatic changes, and our capability to better constrain forecasts of the coastal ocean circulation at temporal scales from sub-seasonal to inter-annual and spatial scales from regional to meso- and submesoscale. The Mediterranean Sea is a well-known laboratory ocean where meso- and submesoscale features can be ideally observed and studied, as shown by key contributions from projects such as Perseus, CMEMS and Jericonext, among others. The challenge for the next 10 years is the integration of these technologies and multi-platform observing and forecasting systems to (a) monitor the variability at small scales (mesoscale/weeks) in order (b) to resolve the sub-basin/seasonal and inter-annual variability and thereby (c) establish the decadal variability, understand the associated biases and correct them. In other words, the new observing systems now allow a major change in our focus of ocean observation, from small to large scales. Recent studies from SOCIB -www.socib.es- have shown the importance of this new small-to-large-scale multi-platform approach to ocean observation. Three examples from the integration capabilities of SOCIB facilities will be presented and discussed. First, the quasi-continuous high-frequency glider monitoring of the Ibiza Channel since 2011, an important biodiversity hotspot and a 'choke' point in the Western Mediterranean circulation, has allowed us to reveal a high-frequency variability in the north-south exchanges, with very significant changes (0.8-0.9 Sv) occurring over periods of days to weeks, of the same order as the previously known seasonal cycle. HF radar data and model results have also contributed more recently to better describe and understand the variability at small scales. Second, the Alborex/Perseus multi-platform experiment (e.g., RV catamaran, 2 gliders, 25 drifters, 3 Argo-type profilers and satellite data), which focused on submesoscale processes and ecosystem response, was carried out in the Alborán Sea in May 2014. Glider results showed significant chlorophyll subduction in areas adjacent to the steep density front, with patterns related to vertical motion. Initial dynamical interpretations will be presented. Third and finally, I will discuss the key role of the data centre in guaranteeing data interoperability, quality control, availability and distribution, which is needed for this new approach to ocean observation and forecasting to be really efficient in responding to key scientific state-of-the-art priorities, enhancing technology development and responding to society's needs.

  1. Understanding the k^-5/3 to k^-2.4 spectral break in aircraft wind data

    NASA Astrophysics Data System (ADS)

    Pinel, J.; Lovejoy, S.; Schertzer, D. J.; Tuck, A.

    2010-12-01

    A fundamental issue in atmospheric dynamics is to understand how the statistics of fluctuations of various fields vary with their space-time scale. The classical, and still “standard”, model dates back to Kraichnan and Charney’s work on 2-D and geostrophic (quasi-2-D) turbulence at the end of the 1960s and early 1970s. It postulates an isotropic 2-D turbulent regime at large scales and an isotropic 3-D regime at small scales, separated by a “dimensional transition” (once called a “mesoscale gap”) near the pressure scale height of ≈10 km. By the early 1980s a quite different model had emerged, the 23/9-D scaling model, in which the dynamics were postulated to be dominated (over wide scale ranges) by a strongly anisotropic scale-invariant cascade mechanism, with structures becoming flatter and flatter at larger and larger scales in a scaling manner: the isotropy assumptions were discarded but the scaling and cascade assumptions retained. Today, thanks to the revolution in geodata and atmospheric models, both in quality and quantity, the 23/9-D model can explain the observed horizontal cascade structures in remotely sensed radiances, in meteorological “reanalyses”, in meteorological models, in high-resolution dropsonde vertical analyses, and in lidar vertical sections. All of these analyses directly contradict the standard model, which predicts drastic “dimensional transitions” for scalar quantities. Indeed, until recently the only unexplained feature was a scale break in aircraft spectra of the (vector) horizontal wind somewhere between about 40 and 200 km. However, contrary to repeated claims, and thanks to a reanalysis of the historical papers, the transition that had been observed since the 1980s was not between k^-5/3 and k^-3 but rather between k^-5/3 and k^-2.4. By 2009, the standard model was thus hanging by a thread. This was cut when careful analysis of scientific aircraft data allowed the 23/9-D model to explain the large-scale k^-2.4 regime as an artefact of the aircraft following a sloping trajectory: at large enough scales, the spectrum is simply dominated by vertical rather than horizontal fluctuations, which have the required k^-2.4 form. Since aircraft frequently follow gently sloping isobars, this neatly removes the last obstacle to wide-range anisotropic scaling models, finally opening the door to an urgently needed consensus on the statistical structure of the atmosphere. However, objections remain: at large enough scales, do isobaric and isoheight spectra really have different exponents? In this presentation we study this issue in more detail than before by analyzing data measured by commercial aircraft through the Tropospheric Airborne Meteorological Data Reporting (TAMDAR) system over CONUS during the year 2009. The TAMDAR system allows us to calculate the statistical properties of the wind field on constant-pressure and constant-altitude levels. Various statistical exponents were calculated (velocity increments in terms of horizontal and vertical displacement, pressure and time), and we show what we learned and how this analysis can help resolve the question.
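
    A minimal sketch of how such a spectral break can be located in practice, assuming a 1-D along-track wind series (the synthetic data, function names and the choice of Welch's method are illustrative, not the authors' pipeline): log-log slopes are fitted separately below and above a candidate break wavenumber.

      import numpy as np
      from scipy.signal import welch

      def spectral_slopes(u, dx, k_break):
          """Fit log-log spectral slopes below and above a candidate break.

          u       : 1-D along-track wind series (synthetic stand-in here)
          dx      : sample spacing in km
          k_break : candidate break wavenumber in cycles/km
          """
          f, psd = welch(u, fs=1.0 / dx, nperseg=min(len(u) // 4, 4096))
          keep = (f > 0) & (psd > 0)
          f, psd = f[keep], psd[keep]
          slopes = []
          for band in ((f < k_break), (f >= k_break)):
              # least-squares fit of log10(PSD) against log10(k) in the band
              slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
              slopes.append(slope)
          # real tracks: ~ -2.4 in the low-k band, ~ -5/3 in the high-k band
          return slopes

      # red-noise toy series standing in for a TAMDAR wind record
      rng = np.random.default_rng(0)
      u = np.cumsum(rng.standard_normal(16384))
      print(spectral_slopes(u, dx=1.0, k_break=1.0 / 100.0))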

  2. Beyond the standard Higgs after the 125 GeV Higgs discovery.

    PubMed

    Grojean, C

    2015-01-13

    An elementary, weakly coupled and solitary Higgs boson allows one to extend the validity of the Standard Model up to very high energy, maybe as high as the Planck scale. Nonetheless, this scenario fails to fill the universe with dark matter and does not explain the matter-antimatter asymmetry. However, amending the Standard Model tends to destabilize the weak scale by large quantum corrections to the Higgs potential. New degrees of freedom, new forces, new organizing principles are required to provide a consistent and natural description of physics beyond the standard Higgs.

  3. Silicone elastomers capable of large isotropic dimensional change

    DOEpatents

    Lewicki, James; Worsley, Marcus A.

    2017-07-18

    Described herein is a highly effective route towards the controlled and isotropic reduction in size scale of complex 3D structures using silicone network polymer chemistry. In particular, a class of silicone structures was developed that, once patterned and cured, can `shrink` micron-scale additively manufactured and lithographically patterned structures by as much as one order of magnitude while preserving the relative dimensions and integrity of these parts. This class of silicone materials is compatible with existing additive manufacturing and soft-lithography fabrication processes and will allow access to a hitherto unobtainable dimensionality of fabrication.

  4. Beyond the standard Higgs after the 125 GeV Higgs discovery

    PubMed Central

    Grojean, C.

    2015-01-01

    An elementary, weakly coupled and solitary Higgs boson allows one to extend the validity of the Standard Model up to very high energy, maybe as high as the Planck scale. Nonetheless, this scenario fails to fill the universe with dark matter and does not explain the matter–antimatter asymmetry. However, amending the Standard Model tends to destabilize the weak scale by large quantum corrections to the Higgs potential. New degrees of freedom, new forces, new organizing principles are required to provide a consistent and natural description of physics beyond the standard Higgs.

  5. Higher order moments of the matter distribution in scale-free cosmological simulations with large dynamic range

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1994-01-01

    We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small residual signal of deviation from hierarchical scaling is, however, also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation-theory result.
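
    As a concrete illustration of the quantities involved, the following sketch estimates connected moments of a smoothed density-contrast field and the hierarchical amplitudes S_q = ξ̄_q / ξ̄_2^(q-1). The lognormal toy field and all names are illustrative stand-ins for an N-body snapshot, and shot-noise corrections are omitted.

      import numpy as np

      def reduced_moments(delta, qmax=5):
          """Connected moments <delta^q>_c of a zero-mean density contrast."""
          m = {q: np.mean(delta**q) for q in range(2, qmax + 1)}
          xi = {2: m[2], 3: m[3]}
          xi[4] = m[4] - 3 * m[2]**2            # subtract the Gaussian part
          xi[5] = m[5] - 10 * m[3] * m[2]
          return xi

      # lognormal toy field on a 64^3 grid standing in for counts in cells
      rng = np.random.default_rng(1)
      rho = np.exp(0.5 * rng.standard_normal((64, 64, 64)))
      delta = rho / rho.mean() - 1.0
      xi = reduced_moments(delta)
      for q in (3, 4, 5):
          S_q = xi[q] / xi[2]**(q - 1)          # hierarchical amplitudes S_q
          print(q, S_q)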

  6. The Effects of Glossary and Read-Aloud Accommodations on English Language Learners' Performance on a Mathematics Assessment

    ERIC Educational Resources Information Center

    Wolf, Mikyung Kim; Kim, Jinok; Kao, Jenny

    2012-01-01

    Glossary and reading aloud test items are commonly allowed in many states' accommodation policies for English language learner (ELL) students for large-scale mathematics assessments. However, little research is available regarding the effects of these accommodations on ELL students' performance. Further, no research exists that examines how…

  7. Timber markets and fuel treatments in the western US

    Treesearch

    Karen L. Abt; Jeffrey P. Prestemon

    2006-01-01

    We developed a model of interrelated timber markets in the U.S. West to assess the impacts of large-scale fuel reduction programs on these markets, and concomitant effects of the market on the fuel reduction programs. The linear programming spatial equilibrium model allows interstate and international trade with western Canada and the rest of the world, while...

  8. Large-scale monitoring of air pollution in remote and ecologically important areas

    Treesearch

    Andrzej Bytnerowicz; Witold Fraczek

    2013-01-01

    New advances in air quality monitoring techniques, such as passive samplers for nitrogenous (N) or sulphurous (S) pollutants and ozone (O3), have allowed for an improved understanding of concentrations of these pollutants in remote areas. Mountains create special problems with regard to the feasibility of establishing and maintaining air pollution monitoring networks,...

  9. Development of multitissue microfluidic dynamic array for assessing changes in gene expression associated with channel catfish appetite, growth, metabolism, and intestinal health

    USDA-ARS's Scientific Manuscript database

    Large-scale, gene expression methods allow for high throughput analysis of physiological pathways at a fraction of the cost of individual gene expression analysis. Systems, such as the Fluidigm quantitative PCR array described here, can provide powerful assessments of the effects of diet, environme...

  10. Genome-wide association analysis based on multiple imputation with low-depth GBS data: application to biofuel traits in reed canarygrass

    USDA-ARS's Scientific Manuscript database

    Genotyping-by-sequencing allows for large-scale genetic analyses in plant species with no reference genome, creating the challenge of sound inference in the presence of uncertain genotypes. Here we report an imputation-based genome-wide association study (GWAS) in reed canarygrass (Phalaris arundina...

  11. Place assessment: how people define ecosystems.

    Treesearch

    Steven J. Galliano; Gary M. Loeffler

    1999-01-01

    Understanding the concepts of place in ecosystem management may allow land managers to more actively inventory and understand the meanings that people attach to the lands and resources under the care of the land manager. Because place assessment has not been used operationally in past large-scale evaluations and analyses, it was necessary in the assessment of the...

  12. Soft Power and Hard Measures: Large-Scale Assessment, Citizenship and the European Union

    ERIC Educational Resources Information Center

    Rutkowski, David; Engel, Laura C.

    2010-01-01

    This article explores the International Civic and Citizenship Education Study (ICCS) with particular emphasis on the European Union's (EU's) involvement in the regional portion. Using the ICCS, the EU actively combines hard measures with soft power, allowing the EU to define and steer cross-national rankings of values of EU citizenship. The…

  13. Genome-wide association study based on multiple imputation with low-depth sequencing data: application to biofuel traits in reed canarygrass

    USDA-ARS's Scientific Manuscript database

    Genotyping by sequencing allows for large-scale genetic analyses in plant species with no reference genome, but sets the challenge of sound inference in presence of uncertain genotypes. We report an imputation-based genome-wide association study (GWAS) in reed canarygrass (Phalaris arundinacea L., P...

  14. Assessing High Impact Practices Using NVivo: An Automated Approach to Analyzing Student Reflections for Program Improvement

    ERIC Educational Resources Information Center

    Blaney, Jennifer; Filer, Kimberly; Lyon, Julie

    2014-01-01

    Critical reflection allows students to synthesize their learning and deepen their understanding of an experience (Ash & Clayton, 2009). A recommended reflection method is for students to write essays about their experiences. However, on a large scale, such reflection essays become difficult to analyze in a meaningful way. At Roanoke College,…

  15. Diving in Head First: Finding the Volume of Norris lake

    ERIC Educational Resources Information Center

    Foster, Drew W.

    2008-01-01

    This article allows students to apply their knowledge and experience of area and volume to find the volume of Norris Lake, a large reservoir lake in Tennessee. Students have the opportunity to demonstrate their skills in using maps and scales as well as to incorporate the use of technology in developing the solution. This project satisfied the…

  16. Beyond Therapy Dogs: Coordinating Large-Scale Finals Week Activities

    ERIC Educational Resources Information Center

    Flynn, Holly

    2017-01-01

    Finals week activities have become increasingly popular in academic libraries in the last few years, but what is a library to do when it is not allowed to have therapy dogs? This column examines a progression of increasingly popular activities at Michigan State University Libraries. Included is an assessment of what makes them popular, our…

  17. Perceptions of Human Services Students about Social Change Education

    ERIC Educational Resources Information Center

    Herzberg, Judith T.

    2010-01-01

    Human services educators and scholars maintain that they are teaching social change theory and skills that will allow students to engage in large-scale social change. A review of the literature, from a critical theory perspective, offered little evidence that social change is being taught in human services programs. In this collective case study,…

  18. Constraining free riding in public goods games: designated solitary punishers can sustain human cooperation

    PubMed Central

    O'Gorman, Rick; Henrich, Joseph; Van Vugt, Mark

    2008-01-01

    Much of human cooperation remains an evolutionary riddle. Unlike other animals, people frequently cooperate with non-relatives in large groups. Evolutionary models of large-scale cooperation require not just incentives for cooperation, but also a credible disincentive for free riding. Various theoretical solutions have been proposed and experimentally explored, including reputation monitoring and diffuse punishment. Here, we empirically examine an alternative theoretical proposal: responsibility for punishment can be borne by one specific individual. This experiment shows that allowing a single individual to punish increases cooperation to the same level as allowing each group member to punish, and results in greater group profits. These results suggest a potential key function of leadership in human groups and provide further evidence that humans will readily and knowingly behave altruistically. PMID:18812292

  19. Genetic Approaches to Study Meiosis and Meiosis-Specific Gene Expression in Saccharomyces cerevisiae.

    PubMed

    Kassir, Yona; Stuart, David T

    2017-01-01

    The budding yeast Saccharomyces cerevisiae has a long history as a model organism for studies of meiosis and the cell cycle. The popularity of this yeast as a model is in large part due to the variety of genetic and cytological approaches that can be effectively performed with the cells. Cultures can be induced to progress synchronously through meiosis and sporulation, allowing large-scale gene expression and biochemical studies to be performed. Additionally, the spore tetrads resulting from meiosis make it possible to characterize the haploid products of meiosis, allowing investigation of meiotic recombination and chromosome segregation. Here we describe genetic methods for analyzing the progression of S. cerevisiae through meiosis and sporulation, with an emphasis on strategies for the genetic analysis of regulators of meiosis-specific genes.

  20. Regional turbulence patterns driven by meso- and submesoscale processes in the Caribbean Sea

    NASA Astrophysics Data System (ADS)

    C. Pérez, Juan G.; R. Calil, Paulo H.

    2017-09-01

    The surface ocean circulation in the Caribbean Sea is characterized by the interaction between anticyclonic eddies and the Caribbean Upwelling System (CUS). These interactions lead to instabilities that modulate the transfer of kinetic energy up- or down-cascade. The interaction of North Brazil Current rings with the islands leads to the formation of submesoscale vorticity filaments leeward of the Lesser Antilles, transferring kinetic energy (KE) from large to small scales. Within the Caribbean, upper-ocean dynamics range from large-scale currents to coastal upwelling filaments that allow the vertical exchange of physical properties and supply KE to larger scales. In this study, we use a regional model with different spatial resolutions (6, 3, and 1 km), focusing on the Guajira Peninsula and the Lesser Antilles in the Caribbean Sea, in order to evaluate the impact of submesoscale processes on the regional KE cascade. Ageostrophic velocities emerge as the Rossby number becomes O(1). As model resolution increases, submesoscale motions become more energetic, as seen in the flatter KE spectra compared with the lower-resolution runs. KE injection at large scales is greater in the Guajira region than in the other regions and is more effectively transferred to smaller scales, showing that submesoscale dynamics are key in modulating eddy kinetic energy and the energy cascade within the Caribbean Sea.

  1. Conservation of reef manta rays (Manta alfredi) in a UNESCO World Heritage Site: Large-scale island development or sustainable tourism?

    PubMed Central

    Elamin, Nasreldin Alhasan; Yurkowski, David James; Chekchak, Tarik; Walter, Ryan Patrick; Klaus, Rebecca; Hill, Graham; Hussey, Nigel Edward

    2017-01-01

    A large reef manta ray (Manta alfredi) aggregation has been observed off the north Sudanese Red Sea coast since the 1950s. Sightings have been predominantly within the boundaries of a marine protected area (MPA), which was designated a UNESCO World Heritage Site in July 2016. Contrasting economic development trajectories have been proposed for the area (small-scale ecotourism and large-scale island development). To examine space use, Wildlife Computers® SPOT 5 tags were secured to three manta rays. A two-state switching Bayesian state space model (BSSM), which allowed movement parameters to switch between resident and travelling, was fit to the recorded locations, and 50% and 95% kernel utilization distribution (KUD) home ranges were calculated. A total of 682 BSSM locations were recorded between 30 October 2012 and 6 November 2013. Of these, 98.5% fell within the MPA boundaries: 99.5% for manta 1, 91.5% for manta 2, and 100% for manta 3. The BSSM identified all three mantas as resident during 99% of transmissions, with 50% and 95% KUD home ranges falling mainly within the MPA boundaries. For all three mantas combined (88.4%), and for each individual (manta 1, 92.4%; manta 2, 64.9%; manta 3, 91.9%), the majority of locations occurred within 15 km of the proposed large-scale island development. The results indicated that the MPA boundaries are spatially appropriate for manta rays in the region; however, the close association with the proposed large-scale development highlights the potential threat of disruption. Conversely, the focused nature of spatial use highlights the potential for reliable ecotourism opportunities. PMID:29069079

  2. Conservation of reef manta rays (Manta alfredi) in a UNESCO World Heritage Site: Large-scale island development or sustainable tourism?

    PubMed

    Kessel, Steven Thomas; Elamin, Nasreldin Alhasan; Yurkowski, David James; Chekchak, Tarik; Walter, Ryan Patrick; Klaus, Rebecca; Hill, Graham; Hussey, Nigel Edward

    2017-01-01

    A large reef manta ray (Manta alfredi) aggregation has been observed off the north Sudanese Red Sea coast since the 1950s. Sightings have been predominantly within the boundaries of a marine protected area (MPA), which was designated a UNESCO World Heritage Site in July 2016. Contrasting economic development trajectories have been proposed for the area (small-scale ecotourism and large-scale island development). To examine space use, Wildlife Computers® SPOT 5 tags were secured to three manta rays. A two-state switching Bayesian state space model (BSSM), which allowed movement parameters to switch between resident and travelling, was fit to the recorded locations, and 50% and 95% kernel utilization distribution (KUD) home ranges were calculated. A total of 682 BSSM locations were recorded between 30 October 2012 and 6 November 2013. Of these, 98.5% fell within the MPA boundaries: 99.5% for manta 1, 91.5% for manta 2, and 100% for manta 3. The BSSM identified all three mantas as resident during 99% of transmissions, with 50% and 95% KUD home ranges falling mainly within the MPA boundaries. For all three mantas combined (88.4%), and for each individual (manta 1, 92.4%; manta 2, 64.9%; manta 3, 91.9%), the majority of locations occurred within 15 km of the proposed large-scale island development. The results indicated that the MPA boundaries are spatially appropriate for manta rays in the region; however, the close association with the proposed large-scale development highlights the potential threat of disruption. Conversely, the focused nature of spatial use highlights the potential for reliable ecotourism opportunities.
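
    A minimal sketch of the 50%/95% KUD computation named above, assuming a simple Gaussian kernel density estimate on a grid (the synthetic positions and threshold logic are illustrative; dedicated home-range packages handle projections and bandwidth selection properly):

      import numpy as np
      from scipy.stats import gaussian_kde

      def kud_mask(lon, lat, level, grid=200):
          """Boolean grid marking the smallest region holding `level` of the KUD."""
          kde = gaussian_kde(np.vstack([lon, lat]))
          xi = np.linspace(lon.min() - 0.5, lon.max() + 0.5, grid)
          yi = np.linspace(lat.min() - 0.5, lat.max() + 0.5, grid)
          X, Y = np.meshgrid(xi, yi)
          Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
          Z = Z / Z.sum()                      # normalize to a probability mass
          zs = np.sort(Z.ravel())[::-1]        # densest cells first
          cutoff = zs[np.searchsorted(np.cumsum(zs), level)]
          return X, Y, Z >= cutoff             # e.g. level=0.50 or 0.95

      # synthetic positions standing in for BSSM-filtered tag locations
      rng = np.random.default_rng(2)
      lon = 37.2 + 0.05 * rng.standard_normal(300)
      lat = 20.8 + 0.04 * rng.standard_normal(300)
      X, Y, core = kud_mask(lon, lat, 0.50)
      print("50% KUD covers", core.mean() * 100, "% of the grid")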

  3. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge-enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data; Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed and edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.
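
    A toy version of the adaptive-smoothing idea (illustrative only; the actual Ebeling et al. algorithm grows its kernel until a significance criterion on the local counts is met): each pixel takes the value of the narrowest Gaussian smoothing whose kernel collects a minimum number of counts, so bright compact features stay sharp while faint extended emission is smoothed heavily.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def adaptive_smooth(img, min_counts=50, sigmas=(0.5, 1, 2, 4, 8, 16)):
          """Per pixel, use the narrowest scale that collects min_counts."""
          smoothed = [gaussian_filter(img, s) for s in sigmas]
          out = smoothed[-1].copy()            # fall back to the widest kernel
          done = np.zeros(img.shape, dtype=bool)
          for sm, s in zip(smoothed, sigmas):
              # counts gathered under the kernel ~ local mean * kernel area
              ok = (~done) & (sm * 2 * np.pi * s**2 >= min_counts)
              out[ok] = sm[ok]
              done |= ok
          return out

      # toy image: a bright point source on a faint extended background
      rng = np.random.default_rng(1)
      img = rng.poisson(0.2, (256, 256)).astype(float)
      img[128, 128] += 500.0
      print(adaptive_smooth(img).max())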

  4. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time-consuming as more data are added by newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40,000 vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, that allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. Using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-saving method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
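
    The tile-addressing arithmetic behind such a format can be sketched as follows (the tile size, pyramid layout and byte-offset scheme are assumptions for illustration; the paper's actual file format is not specified here):

      TILE = 256  # tile edge in pixels (assumed)

      def tiles_for_roi(x0, y0, x1, y1, level, image_w, image_h):
          """Yield (level, row, col) tiles covering an ROI at a pyramid level.

          Level 0 is full resolution; each level halves both dimensions.
          """
          scale = 2 ** level
          w, h = image_w // scale, image_h // scale
          n_cols = (w + TILE - 1) // TILE
          n_rows = (h + TILE - 1) // TILE
          c0, c1 = (x0 // scale) // TILE, min((x1 // scale) // TILE, n_cols - 1)
          r0, r1 = (y0 // scale) // TILE, min((y1 // scale) // TILE, n_rows - 1)
          for row in range(r0, r1 + 1):
              for col in range(c0, c1 + 1):
                  yield (level, row, col)

      def tile_offset(level, row, col, image_w, image_h, tile_bytes):
          """Byte offset of one tile, assuming finer levels are stored first."""
          offset = 0
          for lv in range(level):              # skip every tile of finer levels
              w, h = image_w >> lv, image_h >> lv
              offset += ((w + TILE - 1) // TILE) * ((h + TILE - 1) // TILE)
          w = image_w >> level
          n_cols = (w + TILE - 1) // TILE
          return (offset + row * n_cols + col) * tile_bytes

      # which tiles cover a 1000-pixel patch at level 2 of a 30720 x 20480 frame?
      print(list(tiles_for_roi(12000, 8000, 13000, 9000, 2, 30720, 20480)))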

  5. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    PubMed

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. Compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or better accuracy than memory-efficient algorithms that can process a large amount of RNA-Seq data, and comparable or slightly lower accuracy than memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy thus allows memory-intensive de novo transcriptome assembly algorithms to be used to construct large assemblies.
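
    A greatly simplified sketch of the merge step (the paper's actual algorithm scores transcript quality; the k-mer containment test and all names below are illustrative): transcripts from the per-library assemblies are pooled and a longest-first pass drops sequences already covered by what has been kept.

      def merge_assemblies(assemblies, k=25):
          """Greedy non-redundant merge of several {name: sequence} assemblies.

          A transcript is dropped if all of its k-mers are already covered by
          longer transcripts kept earlier, a crude containment test.
          """
          pooled = [(name, seq) for asm in assemblies for name, seq in asm.items()]
          pooled.sort(key=lambda t: len(t[1]), reverse=True)   # longest first
          seen_kmers = set()
          kept = {}
          for name, seq in pooled:
              kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
              if not kmers:                # shorter than k: keep unconditionally
                  kept[name] = seq
                  continue
              if kmers <= seen_kmers:
                  continue                 # fully contained in kept transcripts
              kept[name] = seq
              seen_kmers |= kmers
          return kept

      a1 = {"t1": "ACGTACGTACGTACGTACGTACGTACG", "t2": "TTTTCCCCGGGGAAAA"}
      a2 = {"u1": "ACGTACGTACGTACGTACGTACGTACG"}   # duplicate of t1 across libraries
      print(sorted(merge_assemblies([a1, a2], k=8)))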

  6. A Multiscale Survival Process for Modeling Human Activity Patterns.

    PubMed

    Zhang, Tianyang; Cui, Peng; Song, Chaoming; Zhu, Wenwu; Yang, Shiqiang

    2016-01-01

    Human activity plays a central role in understanding large-scale social dynamics. It is well documented that individual activity patterns follow bursty dynamics characterized by heavy-tailed interevent time distributions. Here we study a large-scale online chatting dataset consisting of 5,549,570 users, finding that individual activity patterns vary with timescale, whereas existing models approximate empirical observations only within a limited timescale. We propose a novel approach that models the intensity rate at which an individual triggers an activity. We demonstrate that the model precisely captures the corresponding human dynamics across multiple timescales spanning five orders of magnitude. Our model also allows extraction of the population heterogeneity of activity patterns, characterized by a set of individual-specific ingredients. Integrating our approach with social interactions leads to a wide range of implications.
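
    A toy contrast between heavy-tailed and Poisson interevent statistics (illustrative only; this is not the paper's multiscale survival model): Pareto-distributed gaps reproduce the bursty signature of long silences punctuated by rapid activity.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000

      # Pareto interevent times, P(tau) ~ tau^-(alpha+1) with alpha = 1.2
      alpha = 1.2
      bursty = 1.0 + rng.pareto(alpha, n)            # minimum gap of 1 time unit
      poisson = rng.exponential(bursty.mean(), n)    # matched mean rate

      for name, tau in (("bursty", bursty), ("poisson", poisson)):
          # a heavy tail shows up as a large ratio of extreme to typical gaps
          print(name, "mean:", tau.mean().round(2),
                "p99.9/median:", (np.quantile(tau, 0.999) / np.median(tau)).round(1))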

  7. Wafer level reliability for high-performance VLSI design

    NASA Technical Reports Server (NTRS)

    Root, Bryan J.; Seefeldt, James D.

    1987-01-01

    As very large scale integration (VLSI) architectures require higher packing density, the reliability of these devices has approached a critical level. Previous processing techniques allowed a large window of varying reliability. However, as scaling and higher current densities push reliability to its limits, tighter control and instant feedback become critical. Several test structures developed to monitor reliability at the wafer level are described. For example, one test structure monitors metal integrity in seconds, as opposed to weeks or months for conventional testing. Another monitors mobile-ion contamination at critical steps in the process. Thus reliability jeopardy can be assessed during fabrication, preventing defective devices from ever being placed in the field. Most importantly, reliability can be assessed on every wafer, as opposed to an occasional sample.

  8. Clustering fossils in solid inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir

    In solid inflation the single-field non-Gaussianity consistency condition is violated. As a result, a long-wavelength tensor perturbation induces observable clustering fossils in the form of a quadrupole anisotropy in the large-scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectra for the general parameter space of solid inflation. We consider the region of parameter space in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters, we calculate the quadrupole anisotropy induced by long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossils from primordial gravitational waves on large-scale structures can be detected in future galaxy surveys.

  9. Topological structure dynamics revealing collective evolution in active nematics

    PubMed Central

    Shi, Xia-qing; Ma, Yu-qiang

    2013-01-01

    Topological defects frequently emerge in active matter such as bacterial colonies, cytoskeleton extracts on substrates, and self-propelled granular or colloidal layers, but their dynamical properties and their relation to large-scale organization and fluctuations in these active systems are seldom examined. Here we reveal, through a simple model for active nematics using self-driven hard elliptic rods, that the excitation, annihilation and transport of topological defects differ markedly from those in non-active media. These dynamical processes exhibit strong irreversibility in active nematics in the absence of detailed balance. Moreover, topological defects are the key factors organizing large-scale dynamic structures and collective flows, resulting in multiple spatiotemporal effects. These findings allow us to control the self-organization of active matter through topological structures. PMID:24346733

  10. Motions of charged particles in the Magnetosphere under the influence of a time-varying large scale convection electric field

    NASA Technical Reports Server (NTRS)

    Smith, P. H.; Bewtra, N. K.; Hoffman, R. A.

    1979-01-01

    The motions of charged particles under the influence of geomagnetic and electric fields are quite complex in the region of the inner magnetosphere. A Volland-Stern-type large-scale convection electric field was used successfully to predict both the plasmapause location and particle enhancements determined from Explorer 45 measurements. A time dependence was introduced into this electric field based on the variation of Kp for actual magnetic storm conditions. Particle trajectories were computed as they evolve in this time-varying electric field. Several storm fronts of particles of different magnetic moments were injected into the inner magnetosphere from L = 10 in the equatorial plane. The motions of these fronts are presented in a movie format.

  11. Beta decay rates of neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel

    2015-10-01

    Heavy-element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on nuclear structure models for input. Of all the properties, beta-decay half-lives are among the most important due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available, based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on a relativistic nuclear energy density functional, in which both allowed and first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.

  12. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    The aim of this paper is to produce a methodology that allows users of ordinal-scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be uninformative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved, and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model does, however, place minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research, such as large epidemiological surveys, where careful documentation of the reasons for response may be difficult. PMID:18351711
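
    The mixture structure described above can be written down directly. The following sketch (illustrative parameterization, not the authors' code) evaluates the log-likelihood of a zero-inflated proportional-odds model in which non-susceptible subjects always land on the "none" anchor:

      import numpy as np

      def cat_probs(cutpoints, eta):
          """Proportional-odds category probabilities for one subject.

          cutpoints : increasing array c_1 < ... < c_{K-1}
          eta       : linear predictor x'beta
          P(Y <= k) = logistic(c_k - eta)
          """
          cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
          cum = np.concatenate([[0.0], cum, [1.0]])
          return np.diff(cum)                  # probabilities of categories 0..K-1

      def zi_ordinal_loglik(y, cutpoints, eta, p_susc):
          """Log-likelihood of the mixture: non-susceptible subjects score 0."""
          pi = cat_probs(cutpoints, eta)
          probs = p_susc * pi
          probs[0] += 1.0 - p_susc             # extra mass at the 'none' anchor
          return np.sum(np.log(probs[np.asarray(y)]))

      # example: 4-point scale (0 = none), 70% susceptible, no covariate effect
      y = [0, 0, 0, 1, 2, 0, 3, 0]
      print(zi_ordinal_loglik(y, cutpoints=[-0.5, 0.8, 2.0], eta=0.0, p_susc=0.7))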

  13. Soil-geographical regionalization as a basis for digital soil mapping: Karelia case study

    NASA Astrophysics Data System (ADS)

    Krasilnikov, P.; Sidorova, V.; Dubrovina, I.

    2010-12-01

    The recent development of digital soil mapping (DSM) has significantly improved the quality of soil maps. We constructed a set of empirical models for the territory of Karelia, a republic in the north-east of the European part of the Russian Federation. This territory was selected for the DSM pilot study for two reasons. First, the soils of the region are mainly monogenetic; thus, the effect of the paleogeographic environment on recent soils is reduced. Second, the territory was poorly mapped because of low agricultural development: only 1.8% of the total area of the republic is used for agriculture and has large-scale soil maps. The rest of the territory has only small-scale soil maps, compiled on the basis of general geographic concepts rather than field surveys. Thus, the only solution for soil inventory was predictive digital mapping. The absence of large-scale soil maps did not allow data mining from previous soil surveys, so only empirical models could be applied. For regionalization purposes, we adopted the division into Northern and Southern Karelia proposed in the general scheme of soil regionalization of Russia; boundaries between the regions were somewhat modified. Within each region, we specified from 15 (Northern Karelia) to 32 (Southern Karelia) individual soilscapes and proposed soil-topographic and soil-lithological relationships for each soilscape. Further field verification is needed to adjust the models.

  14. Cosmographical Implications

    NASA Astrophysics Data System (ADS)

    Wright, E. L.

    1992-12-01

    The COBE DMR observation of large-scale anisotropy of the CMBR allows one to compare the gravitational potential measured using ΔT to the gravitational forces required to produce the observed clustering of galaxies. This comparison helps define the allowed range of cosmological models. As shown by Wright et al. (1992), the COBE ΔT agrees quite well with the bulk-flow velocity measured by Bertschinger et al. (1990) in a window of radius 6000 km/sec. This is the best evidence that the initial perturbation spectrum in fact followed the Harrison-Zeldovich (and inflationary) prediction that P(k) ∝ k^n with n = 1. Assuming that n ≈ 1, one can deduce information about the nature of the matter in the Universe: the first conclusion is that a large amount of non-baryonic dark matter is required. The second conclusion is that a linearly evolving model dominated by Cold Dark Matter produces too little structure on 2500 km/sec scales. However, mixed cold-plus-hot dark matter models, vacuum-dominated models, or the Couchman & Carlberg (1992) non-linear recipe for making galaxies out of CDM all seem to reproduce the observed structures on scales from 500-6000 km/sec while connecting to the COBE results with the expected n ≈ 1 slope. COBE is supported by NASA's Astrophysics Division. Goddard Space Flight Center (GSFC), under the scientific guidance of the COBE Science Working Group, is responsible for the development and operation of COBE.

  15. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

    Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms have traditionally been used for image re-alignment via translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) offers an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module named SIFT_PyOCL has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment on both processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
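
    The registration idea can be illustrated on the CPU with OpenCV's SIFT as a stand-in (SIFT_PyOCL runs the analogous keypoint pipeline in OpenCL; the function below and its ratio-test threshold are illustrative, not the module's API):

      import cv2
      import numpy as np

      def align_to_reference(ref, img, roi_mask=None):
          """Warp img onto ref using SIFT keypoints and a RANSAC affine fit.

          ref, img : 8-bit grayscale images
          roi_mask : optional uint8 mask restricting keypoints to a region of
                     interest of arbitrary shape, as described above
          """
          sift = cv2.SIFT_create()
          kp_ref, des_ref = sift.detectAndCompute(ref, roi_mask)
          kp_img, des_img = sift.detectAndCompute(img, roi_mask)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_img, des_ref, k=2)
          # Lowe's ratio test discards ambiguous matches
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]
          src = np.float32([kp_img[m.queryIdx].pt for m in good])
          dst = np.float32([kp_ref[m.trainIdx].pt for m in good])
          M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
          return cv2.warpAffine(img, M, (ref.shape[1], ref.shape[0]))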

  16. Dark matter and cosmological nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Schramm, D. N.

    1986-01-01

    Existing dark matter problems, i.e., dynamics, galaxy formation and inflation, are considered, along with a model which proposes dark baryons as the bulk of the missing matter in a fractal universe. It is shown that no combination of dark, non-baryonic matter can either provide a cosmological density parameter value near unity or, as in the case of high-energy neutrinos, allow formation of condensed matter at epochs when quasars already existed. The possibility that correlations among galactic clusters are scale-free is discussed. Such a distribution of matter would yield a fractal dimension of 1.2, close to a one-dimensional universe. Biasing, cosmic superstrings, and percolated explosions and hot dark matter are theoretical approaches that would satisfy the D = 1.2 fractal model of the large-scale structure of the universe and would also allow sufficient dark matter in halos to close the universe.

  17. The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four-telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes (one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz) are chosen to avoid spectral regions of high atmospheric emission and to span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition-edge-sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic-variance-limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  18. Implementation of the Agitated Behavior Scale in the Electronic Health Record.

    PubMed

    Wilson, Helen John; Dasgupta, Kritis; Michael, Kathleen

    The purpose of this study was to implement the Agitated Behavior Scale in an electronic health record and to evaluate the usability of the scale in the brain injury unit of a rehabilitation hospital. A quality improvement project was conducted in the brain injury unit at a large rehabilitation hospital with registered nurses as participants, recruited by convenience sampling. The project consisted of three phases: education, implementation of the scale in the electronic health record, and administration of a survey questionnaire based on the System Usability Scale. The Agitated Behavior Scale was found to be usable, and there was 92.2% compliance with the use of the electronic Agitated Behavior Scale. The scale was effectively implemented in the electronic health record and was found to be usable in the assessment of agitation. Daily use of the scale through the electronic health record will allow early identification of agitation in patients with traumatic brain injury and enable prompt interventions to manage agitation.

  19. Realization of a Tunable Dissipation Scale in a Turbulent Cascade using a Quantum Gas

    NASA Astrophysics Data System (ADS)

    Navon, Nir; Eigen, Christoph; Zhang, Jinyi; Lopes, Raphael; Smith, Robert; Hadzibabic, Zoran

    2017-04-01

    Many turbulent flows form so-called cascades, in which excitations injected at large length scales are transported to gradually smaller scales until they reach a dissipation scale. We initiate a turbulent cascade in a dilute Bose fluid by pumping energy at the container scale of an optical box trap using an oscillating magnetic force. In contrast to classical fluids, where the dissipation scale is set by the viscosity of the fluid, the turbulent cascade of our quantum gas terminates when the particles' kinetic energy exceeds the laser-trap depth. This mechanism allows us to effectively tune the dissipation scale at which particles (and energy) are lost, and to measure the particle flux in the cascade at the dissipation scale. We observe a unit power-law decay of the particle-dissipation rate with trap depth, confirming the surprising prediction that in a wave-turbulent direct energy cascade the particle flux vanishes in the ideal limit where the dissipation length scale tends to zero.

  20. Evolution of the magnetorotational instability on initially tangled magnetic fields

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.; Subramanian, Kandaswamy

    2017-12-01

    The initial magnetic field of previous magnetorotational instability (MRI) simulations has always included a significant system-scale component, even if stochastic. However, it is of conceptual and practical interest to assess whether the MRI can grow when the initial field is turbulent. The ubiquitous presence of turbulent or random flows in astrophysical plasmas generically leads to a small-scale dynamo (SSD), which would provide the initial seed turbulent velocity and magnetic fields in the plasma that becomes an accretion disc. Can the MRI grow from these more realistic initial conditions? To address this, we supply a standard shearing box with isotropically forced, SSD-generated magnetic and velocity fields as initial conditions and remove the forcing. We find that if the initially supplied fields are too weak or too incoherent, they decay from the initial turbulent cascade faster than they can grow via the MRI. When the initially supplied fields are sufficient to allow MRI growth and sustenance, the saturated stresses, large-scale fields and power spectra match those of the standard zero-net-flux MRI simulation with an initial large-scale vertical field.

  1. Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machicoane, Nathanaël; Volk, Romain

    We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowledge of how size and density affect these time scales is crucial for understanding particle dynamics and may permit stochastic-process modeling using two-time models (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain unbiased Lagrangian integral times. This is particularly useful for accessing the scale separation for such particles and comparing it to the case of fluid particles in a similar configuration.
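
    A minimal sketch of the integral-time computation (synthetic Ornstein-Uhlenbeck velocity standing in for a tracked particle; truncating the integral at the first zero crossing is one common convention):

      import numpy as np

      def integral_time(v, dt):
          """Lagrangian integral time from the velocity autocorrelation."""
          v = v - v.mean()
          acf = np.correlate(v, v, mode="full")[len(v) - 1:]
          acf = acf / acf[0]                      # normalize so rho(0) = 1
          zero = np.argmax(acf <= 0) or len(acf)  # first zero crossing, if any
          return np.trapz(acf[:zero], dx=dt)

      # Ornstein-Uhlenbeck velocity with true integral time T = 1.0
      rng = np.random.default_rng(5)
      dt, T, n = 0.01, 1.0, 200_000
      v = np.empty(n)
      v[0] = 0.0
      for i in range(1, n):
          v[i] = v[i - 1] * (1 - dt / T) + np.sqrt(2 * dt / T) * rng.standard_normal()
      print("estimated T_L:", integral_time(v, dt))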

  2. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.

  3. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
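
    A simplified stand-in for the High Amplitude Event step (the threshold, connectivity and size cut below are illustrative; the patent's pattern-recognition tools are more elaborate): high-amplitude voxels are thresholded and grown into connected bodies.

      import numpy as np
      from scipy import ndimage

      def grow_hae_regions(amplitude, threshold, min_voxels=50):
          """Label connected regions where |amplitude| exceeds a threshold.

          Returns a labeled int array (0 = background), keeping only regions
          with at least min_voxels voxels, i.e. candidate petroleum bodies.
          """
          mask = np.abs(amplitude) >= threshold
          labels, n = ndimage.label(mask)          # face-connected 3-D components
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          keep = np.flatnonzero(sizes >= min_voxels) + 1
          return np.where(np.isin(labels, keep), labels, 0), keep.size

      # toy cube standing in for a migrated 3-D amplitude volume
      rng = np.random.default_rng(4)
      cube = rng.standard_normal((64, 64, 64))
      cube[20:30, 20:30, 20:30] += 4.0             # one synthetic bright body
      regions, n = grow_hae_regions(cube, threshold=3.0)
      print("HAE regions kept:", n)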

  4. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3-D representation of the source distribution of potential fields. Imaging methods are all based on multiscale analysis, allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore plays a key role for this class of methods. We describe here an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation). Thanks to these properties, only the first row of each matrix needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not requiring the large data extension or tapering needed for Fourier-domain computation. It also allows level-to-drape upward continuation and stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
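
    For a 1-D profile the idea reduces to a symmetric Toeplitz operator whose entries come from the discretized upward-continuation kernel, so only one row is needed. The sketch below uses an assumed discretization for line data, not the paper's gridded 2-D operator:

      import numpy as np
      from scipy.linalg import toeplitz

      def upward_continue_profile(field, dx, dz):
          """Upward-continue a profile by dz using the space-domain kernel
          k(x) = (dz/pi) / (x^2 + dz^2), discretized with spacing dx."""
          n = len(field)
          x = np.arange(n) * dx
          row = (dz / np.pi) / (x**2 + dz**2) * dx   # first row = first column
          U = toeplitz(row)                          # symmetric operator
          return U @ field

      # example: anomaly of a buried line source, continued up by 100 m
      dx, dz = 10.0, 100.0
      x = (np.arange(512) - 256) * dx
      g = 1.0 / (x**2 + 200.0**2)                    # source at 200 m depth
      g_up = upward_continue_profile(g, dx, dz)
      # analytically, the result should resemble a source at 300 m depth
      print(g.max(), g_up.max())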

  5. A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis

    NASA Astrophysics Data System (ADS)

    Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.

    2006-12-01

    Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large-scale circulation and tropical modes. The model is configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel configuration alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel at 36 km horizontal resolution for 5 years, from 1996 to 2000, with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed, including two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool to explicitly resolve tropical convection over the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.

  6. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources is extending molecular simulations to biological more relevant system size and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just asmore » in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines. 
    We are instead developing tools specifically intended for large-scale machines with sufficient main memory to hold entire trajectories in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its own entries within the trajectory, which is typically spread across multiple files, and reads the appropriate frames independently of all other processors.
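
    To make the frame-distribution idea concrete, here is a minimal Python sketch (hypothetical, not the DIANA code; the round-robin assignment and per-file frame counts are assumptions): each processor works out, without any communication, which frames of a multi-file trajectory it owns and can then read them independently of all other processors.

        def frames_for_rank(rank, nranks, frames_per_file):
            """Return the (file_index, frame_index) pairs owned by one processor.

            frames_per_file: number of frames stored in each trajectory file.
            Frames are numbered globally and dealt out round-robin, so every
            processor can locate its entries without any communication.
            """
            pairs = []
            global_id = 0
            for fidx, nframes in enumerate(frames_per_file):
                for frame in range(nframes):
                    if global_id % nranks == rank:
                        pairs.append((fidx, frame))
                    global_id += 1
            return pairs

        # Example: a trajectory split over 3 files of 4 frames each, read by 4 processors.
        for rank in range(4):
            print(rank, frames_for_rank(rank, 4, [4, 4, 4]))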

  7. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high-performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes, and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories be carried out using a novel, more integrative and systematic approach. We are developing a much-needed, rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool for addressing the challenges of post-genomic biological research. The strategy for delivering the required data-intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to process trajectories sequentially, time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it was designed this way to be able to run on workstation computers and other architectures whose aggregate memory cannot hold entire trajectories in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
    We are instead developing tools specifically intended for large-scale machines with sufficient main memory to hold entire trajectories in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its own entries within the trajectory, which is typically spread across multiple files, and reads the appropriate frames independently of all other processors.

  8. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large-scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles, and a 406×158 pixel image is segmented in 200 ms.
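
    A software analogue of the event-management semantics described above can be written with a binary heap. This sketch is illustrative only (it is not the pipelined hardware structure): spike events are ordered by timestamp, with O(log n) insert and pop, which is the ordering guarantee the hardware queue provides.

        import heapq

        # Events are (time, neuron_id) pairs; the heap always yields the earliest
        # pending spike next.
        events = []
        heapq.heappush(events, (0.7, 42))   # spike scheduled at t = 0.7 for neuron 42
        heapq.heappush(events, (0.3, 7))
        heapq.heappush(events, (0.5, 19))

        while events:
            t, neuron = heapq.heappop(events)   # O(log n) per operation
            print(f"t={t}: deliver spike of neuron {neuron}")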

  9. Optimizing the scale of markets for water quality trading

    NASA Astrophysics Data System (ADS)

    Doyle, Martin W.; Patterson, Lauren A.; Chen, Yanyou; Schnier, Kurt E.; Yates, Andrew J.

    2014-09-01

    Applying market approaches to environmental regulations requires establishing a spatial scale for trading. Spatially large markets usually increase opportunities for abatement cost savings but increase the potential for pollution damages (hot spots); the reverse holds for spatially small markets. We develop a coupled hydrologic-economic modeling approach for application to point-source emissions trading by a large number of sources and apply this approach to the wastewater treatment plants (WWTPs) within the watershed of the second-largest estuary in the U.S. We consider two different administrative structures that govern the trade of emission permits: one-for-one trading (the number of permits required for each unit of emissions is the same for every WWTP) and trading ratios (the number of permits required for each unit of emissions varies across WWTPs). Results show that water quality regulators should allow trading to occur at the river basin scale as an appropriate first-step policy, as is being done in a limited number of cases via compliance associations. Larger spatial scales may be needed under conditions of increased abatement costs. The optimal scale of the market is generally the same regardless of whether one-for-one trading or trading ratios are employed.

  10. A result about scale transformation families in approximation

    NASA Astrophysics Data System (ADS)

    Apprato, Dominique; Gout, Christian

    2000-06-01

    Scale transformations are common in approximation. In surface approximation from rapidly varying data, one wants to suppress, or at least dampen, the oscillations of the approximation near steep gradients implied by the data. In that case, scale transformations can be used to give some control over overshoot when the surface has large variations of its gradient. Conversely, in image analysis, scale transformations are used in preprocessing to enhance some features present in the image or to increase jumps of grey levels before segmentation of the image. In this paper, we establish the convergence of an approximation method which allows some control over the behavior of the approximation. More precisely, we study the convergence of an approximation constructed from a given data set, while using scale transformations on the values before and after classical approximation. In addition, the construction of the scale transformations is also given. The algorithm is presented with some numerical examples.
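
    The idea of transforming values before and after a classical approximation can be illustrated with a small sketch. The arctan transform, the polynomial fit, and the test data below are invented for the example and are not taken from the paper: a monotone map compresses large values, the fit is done in transformed space, and the inverse map recovers the approximant.

        import numpy as np

        x = np.linspace(0.0, 1.0, 40)
        y = np.tanh(25.0 * (x - 0.5))            # data with a steep gradient

        fwd = np.arctan                          # scale transformation (monotone)
        inv = np.tan                             # its inverse

        # Classical approximation (here: a polynomial fit) in transformed space,
        # then map the approximant back with the inverse transformation.
        coeffs = np.polyfit(x, fwd(y), deg=9)
        y_hat = inv(np.polyval(coeffs, x))

        print("max abs error:", np.max(np.abs(y - y_hat)))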

  11. Roll-to-Roll printed large-area all-polymer solar cells with 5% efficiency based on a low crystallinity conjugated polymer blend

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin; Kurosawa, Tadanori; Yan, Hongping; Wang, Cheng; Toney, Micheal; Bao, Zhenan

    The challenge of continuous printing in high efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution coated all-polymer bulk heterojunction (BHJ) solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, our results showed that microphase-separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. We were able to continuously roll-to-roll slot die print large area all-polymer solar cells with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R coated active layer organic materials on flexible substrate. DOE BRIDGE SunShot program. Office of Naval Research.

  12. Roll-to-Roll Printed Large-Area All-Polymer Solar Cells with 5% Efficiency Based on a Low Crystallinity Conjugated Polymer Blend

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin

    The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, the results show that microphase-separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.

  13. Roll-to-Roll Printed Large-Area All-Polymer Solar Cells with 5% Efficiency Based on a Low Crystallinity Conjugated Polymer Blend

    DOE PAGES

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin; ...

    2017-03-07

    The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, the results show that microphase-separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.

  14. Large-scale generation of human iPSC-derived neural stem cells/early neural progenitor cells and their neuronal differentiation.

    PubMed

    D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonka, Vishwajit L; Nimgaonkar, Vishwajit L

    2014-01-01

    Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or not scalable. We developed a protocol for large-scale generation of neural stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins, and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.

  15. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    PubMed Central

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is a key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with the traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729

  16. A New Method for Rapid Screening of End-Point PCR Products: Application to Single Genome Amplified HIV and SIV Envelope Amplicons

    PubMed Central

    Houzet, Laurent; Deleage, Claire; Satie, Anne-Pascale; Merlande, Laetitia; Mahe, Dominique; Dejucq-Rainsford, Nathalie

    2015-01-01

    PCR is the most widely applied technique for large-scale screening of bacterial clones, mouse genotypes, virus genomes, etc. A drawback of large PCR screening is that amplicon analysis is usually performed using gel electrophoresis, a step that is labor-intensive, tedious, and generates chemical waste. Single genome amplification (SGA) is used to characterize the diversity and evolutionary dynamics of virus populations within infected hosts. SGA is based on the isolation of single template molecules using limiting dilution followed by nested PCR amplification, and requires the analysis of hundreds of reactions per sample, making large-scale SGA studies very challenging. Here we present a novel approach entitled Long Amplicon Melt Profiling (LAMP), based on the analysis of the melting profile of the PCR reactions using SYBR Green and/or EvaGreen fluorescent dyes. The LAMP method represents an attractive alternative to gel electrophoresis and enables the quick discrimination of positive reactions. We validate LAMP for SIV and HIV env-SGA, in 96- and 384-well plate formats. Because melt profiling allows the screening of several thousand PCR reactions in a cost-effective, rapid and robust way, we believe it will greatly facilitate any large-scale PCR screening. PMID:26053379

  17. Carbon Cycle Model Linkage Project (CCMLP): Evaluating Biogeochemical Process Models with Atmospheric Measurements and Field Experiments

    NASA Astrophysics Data System (ADS)

    Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.

    2001-12-01

    Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard for evaluating modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.

  18. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to their spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral/superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points, using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as the log-linear weights or the type of simulation path, at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
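
    The log-linear pooling operator named above combines the data-component distributions as p(x) ∝ Π_i p_i(x)^(w_i), with the weights w_i attributing relative influence to each component. A minimal sketch for discrete distributions follows (the example weights and probabilities are invented for illustration):

        import numpy as np

        def log_linear_pool(probs, weights):
            """probs: (n_sources, n_states) discrete distributions; weights: (n_sources,)."""
            logp = np.sum(weights[:, None] * np.log(probs), axis=0)
            p = np.exp(logp - logp.max())        # stabilise before normalising
            return p / p.sum()

        prior = np.array([0.5, 0.3, 0.2])        # e.g. a kriging-based estimate
        likelihood = np.array([0.2, 0.5, 0.3])   # e.g. a geophysical data term
        weights = np.array([0.4, 0.6])           # the adjustable pooling weights

        print(log_linear_pool(np.vstack([prior, likelihood]), weights))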

  19. Observing the Cosmic Microwave Background Polarization with Variable-delay Polarization Modulators for the Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; CLASS Collaboration

    2018-01-01

    The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subjected to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects. Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid, which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain. The Q-Band CLASS VPM was the first VPM to begin observing the CMB full time in 2016. I will be presenting its design and characterization as well as demonstrating how modulating polarization significantly rejects atmospheric and instrumental long time scale noise.
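
    A heavily idealised sketch of the modulation principle described above (normal incidence, lossless grid and mirror; the wavelength and Stokes values are assumptions, not CLASS parameters): the grid-mirror separation d sets a phase delay δ ≈ 4πd/λ, so the detected signal mixes Stokes U and V as the mirror moves.

        import numpy as np

        wavelength = 7.5e-3                        # ~40 GHz (Q band), metres
        d = np.linspace(0.0, wavelength / 2, 200)  # grid-mirror separations
        delta = 4.0 * np.pi * d / wavelength       # idealised phase delay

        U, V = 1.0, 0.1                            # assumed sky polarisation
        signal = U * np.cos(delta) + V * np.sin(delta)  # detected modulated signal
        print(signal[:5])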

  20. TV Audience Measurement with Big Data.

    PubMed

    Hill, Shawndra

    2014-06-01

    TV audience measurement involves estimating the number of viewers tuned into a TV show at any given time as well as their demographics. First introduced shortly after commercial television broadcasting began in the late 1940s, audience measurement allowed the business of television to flourish by offering networks a way to quantify the monetary value of TV audiences for advertisers, who pay for the estimated number of eyeballs watching during commercials. The first measurement techniques suffered from multiple limitations because reliable, large-scale data were costly to acquire. Yet despite these limitations, measurement standards remained largely unchanged for decades until devices such as cable boxes, video-on-demand boxes, and cell phones, as well as web apps, Internet browser clicks, web queries, and social media activity, resulted in an explosion of digitally available data. TV viewers now leave digital traces that can be used to track almost every aspect of their daily lives, allowing the potential for large-scale aggregation across data sources for individual users and groups and enabling the tracking of more people on more dimensions for more shows. Data are now more comprehensive, available in real time, and cheaper to acquire, enabling accurate and fine-grained TV audience measurement. In this article, I discuss the evolution of audience measurement and what the recent data explosion means for the TV industry and academic research.

  1. Large Scale Software Building with CMake in ATLAS

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above-mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
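
    The partial-rebuild behaviour mentioned above boils down to a transitive closure over an inverted package dependency graph: a change to one package forces a rebuild of that package and of everything that depends on it, directly or transitively. A conceptual Python sketch follows (the package names are hypothetical; this is not the ATLAS/CMake implementation):

        from collections import defaultdict

        deps = {                       # package -> packages it depends on
            "Analysis": ["Reco"],
            "Reco": ["Core"],
            "Trigger": ["Core"],
            "Core": [],
        }

        dependents = defaultdict(set)  # inverted edges: package -> its users
        for pkg, uses in deps.items():
            for u in uses:
                dependents[u].add(pkg)

        def rebuild_set(changed):
            """Everything that must be rebuilt when `changed` is modified."""
            todo, out = [changed], {changed}
            while todo:
                for d in dependents[todo.pop()]:
                    if d not in out:
                        out.add(d)
                        todo.append(d)
            return out

        print(sorted(rebuild_set("Core")))   # ['Analysis', 'Core', 'Reco', 'Trigger']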

  2. Large-scale SNP discovery and construction of a high-density genetic map of Colossoma macropomum through genotyping-by-sequencing

    PubMed Central

    Nunes, José de Ribamar da Silva; Liu, Shikai; Pértille, Fábio; Perazza, Caio Augusto; Villela, Priscilla Marqui Schmidt; de Almeida-Val, Vera Maria Fonseca; Hilsdorf, Alexandre Wagner Silva; Liu, Zhanjiang; Coutinho, Luiz Lehmann

    2017-01-01

    Colossoma macropomum, or tambaqui, is the largest native Characiform species found in the Amazon and Orinoco river basins, yet few resources for genetic studies and the genetic improvement of tambaqui exist. In this study, we identified a large number of single-nucleotide polymorphisms (SNPs) for tambaqui and constructed a high-resolution genetic linkage map from a full-sib family of 124 individuals and their parents using the genotyping-by-sequencing method. In all, 68,584 SNPs were initially identified using a minimum minor allele frequency (MAF) of 5%. Filtering parameters were used to select high-quality markers for linkage analysis. We selected 7,734 SNPs for linkage mapping, resulting in 27 linkage groups with a minimum logarithm of odds (LOD) of 8 and a maximum recombination fraction of 0.35. The final genetic map contains 7,192 successfully mapped markers that span a total of 2,811 cM, with an average marker interval of 0.39 cM. Comparative genomic analysis between tambaqui and zebrafish revealed variable levels of genomic conservation across the 27 linkage groups, which allowed for functional SNP annotations. The large-scale SNP discovery obtained here allowed us to build a high-density linkage map in tambaqui, which will be useful to enhance genetic studies that can be applied in breeding programs. PMID:28387238
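
    As a minimal illustration of the MAF filter mentioned above (the genotype matrix is a toy invented for the example; genotypes are coded as 0/1/2 copies of the alternate allele), SNPs are kept only when the minor allele frequency reaches the 5% threshold:

        import numpy as np

        genotypes = np.array([
            [0, 1, 2, 1, 0, 0],    # SNP 1: common variant, kept
            [0, 0, 0, 0, 0, 0],    # SNP 2: monomorphic, filtered out
            [2, 2, 1, 2, 2, 2],    # SNP 3: minor allele is the reference allele
        ])

        alt_freq = genotypes.mean(axis=1) / 2.0       # alternate-allele frequency
        maf = np.minimum(alt_freq, 1.0 - alt_freq)    # minor allele frequency
        keep = maf >= 0.05
        print(maf.round(3), keep)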

  3. An S_N Algorithm for Modern Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    2016-08-29

    LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.

  4. A Comparison of Hybrid Reynolds Averaged Navier Stokes/Large Eddy Simulation (RANS/LES) and Unsteady RANS Predictions of Separated Flow for a Variable Speed Power Turbine Blade Operating with Low Inlet Turbulence Levels

    DTIC Science & Technology

    2017-10-01

    Facility is a large-scale cascade that allows detailed flow field surveys and blade surface measurements [10-12]. The facility has a continuous run ... structured grids at 2 flow conditions, cruise and takeoff, of the VSPT blade. Computations were run in parallel on a Department of Defense ...

  5. Relating large-scale subsidence to convection development in Arctic mixed-phase marine stratocumulus

    NASA Astrophysics Data System (ADS)

    Young, Gillian; Connolly, Paul J.; Dearden, Christopher; Choularton, Thomas W.

    2018-02-01

    Large-scale subsidence, associated with high-pressure systems, is often imposed in large-eddy simulation (LES) models to maintain the height of boundary layer (BL) clouds. Previous studies have considered the influence of subsidence on warm liquid clouds in subtropical regions; however, the relationship between subsidence and mixed-phase cloud microphysics has not specifically been studied. For the first time, we investigate how widespread subsidence associated with synoptic-scale meteorological features can affect the microphysics of Arctic mixed-phase marine stratocumulus (Sc) clouds. Modelled with LES, four idealised scenarios - a stable Sc, varied droplet (Ndrop) or ice (Nice) number concentrations, and a warming surface (representing motion southwards) - were subjected to different levels of subsidence to investigate the cloud microphysical response. We find strong sensitivities to large-scale subsidence, indicating that high-pressure systems in the ocean-exposed Arctic regions have the potential to generate turbulence and changes in cloud microphysics in any resident BL mixed-phase clouds. Increased cloud convection is modelled with increased subsidence, driven by longwave radiative cooling at cloud top and rain evaporative cooling and latent heating from snow growth below cloud. Subsidence strengthens the BL temperature inversion, thus reducing entrainment and allowing the liquid- and ice-water paths (LWPs, IWPs) to increase. Through increased cloud-top radiative cooling and subsequent convective overturning, precipitation production is enhanced: rain particle number concentrations (Nrain), in-cloud rain mass production rates, and below-cloud evaporation rates increase with increased subsidence. Ice number concentrations (Nice) play an important role, as greater concentrations suppress the liquid phase; therefore, Nice acts to mediate the strength of turbulent overturning promoted by increased subsidence. With a warming surface, a lack of, or low, subsidence allows for rapid BL turbulent kinetic energy (TKE) coupling, leading to a heterogeneous cloud layer, cloud-top ascent, and cumuli formation below the Sc cloud. In these scenarios, higher levels of subsidence act to stabilise the Sc layer, where the combination of these two forcings counteracts one another to produce a stable, yet dynamic, cloud layer.

  6. The challenges associated with developing science-based landscape scale management plans.

    Treesearch

    Robert C. Szaro; Douglas A. Jr. Boyce; Thomas Puchlerz

    2005-01-01

    Planning activities over large landscapes poses a complex of challenges when trying to balance the implementation of a conservation strategy while still allowing for a variety of consumptive and nonconsumptive uses. We examine a case in southeast Alaska to illustrate the breadth of these challenges and an approach to developing a science-based resource plan. Not only...

  7. STEREOMATRIX 3-D display system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteside, Stephen Earl

    1973-08-01

    STEREOMATRIX is a large-screen interactive 3-D laser display system which presents computer-generated wire figures stereoscopically. The presented image can be rotated, translated, and scaled by the system user and the perspective of the image is changed according to the position of the user. A cursor may be positioned in three dimensions to identify points and allows communication with the computer.
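
    The rotate/translate/scale operations described above are standard homogeneous-coordinate transforms. A small sketch of the idea follows (illustrative only, entirely unrelated to the original laser hardware): a single 4x4 matrix rotates a wire figure about z, scales it uniformly, and translates it.

        import numpy as np

        def transform(points, angle_z=0.0, scale=1.0, shift=(0.0, 0.0, 0.0)):
            """Rotate about z, scale uniformly, then translate, via one 4x4 matrix."""
            c, s = np.cos(angle_z), np.sin(angle_z)
            M = np.array([[c * scale, -s * scale, 0.0,   shift[0]],
                          [s * scale,  c * scale, 0.0,   shift[1]],
                          [0.0,        0.0,       scale, shift[2]],
                          [0.0,        0.0,       0.0,   1.0]])
            homog = np.hstack([points, np.ones((len(points), 1))])
            return (homog @ M.T)[:, :3]

        # The 8 vertices of a unit cube as a stand-in wire figure.
        cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        print(transform(cube, angle_z=np.pi / 4, scale=2.0, shift=(0.0, 0.0, 5.0)))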

  8. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At present, computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of the infrastructure's performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  9. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background: Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results: The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code, addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions: The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178
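
    Models of this kind are typically logical (Boolean) networks, which is what lets users avoid writing explicit equations. A toy sketch of a synchronous Boolean-network update follows (the four-node signalling motif is invented for illustration, not taken from the platform):

        # Node update rules: each node's next state is a Boolean function of the
        # current state of the network.
        rules = {
            "ligand":      lambda s: s["ligand"],                 # external input, held fixed
            "receptor":    lambda s: s["ligand"],                 # on iff ligand present
            "kinase":      lambda s: s["receptor"] and not s["phosphatase"],
            "phosphatase": lambda s: s["kinase"],                 # negative feedback
        }

        state = {"ligand": True, "receptor": False, "kinase": False, "phosphatase": False}
        for step in range(6):                                     # synchronous updates
            state = {node: rule(state) for node, rule in rules.items()}
            print(step, state)   # the negative feedback makes the kinase oscillate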

  10. Quantifying design trade-offs of beryllium targets on NIF

    NASA Astrophysics Data System (ADS)

    Yi, S. A.; Zylstra, A. B.; Kline, J. L.; Loomis, E. N.; Kyrala, G. A.; Shah, R. C.; Perry, T. S.; Kanzleiter, R. J.; Batha, S. H.; MacLaren, S. A.; Ralph, J. E.; Masse, L. P.; Salmonson, J. D.; Tipton, R. E.; Callahan, D. A.; Hurricane, O. A.

    2017-10-01

    An important determinant of target performance is implosion kinetic energy, which scales with the capsule size. The maximum achievable performance for a given laser is thus related to the largest capsule that can be imploded symmetrically, constrained by drive uniformity. A limiting factor for symmetric radiation drive is the ratio of hohlraum to capsule radii, or case-to-capsule ratio (CCR). For a fixed laser energy, a larger hohlraum allows for driving bigger capsules symmetrically at the cost of reduced peak radiation temperature (Tr). Beryllium ablators may thus allow for unique target design trade-offs due to their higher ablation efficiency at lower Tr. By utilizing larger hohlraum sizes than most modern NIF designs, beryllium capsules thus have the potential to operate in unique regions of the target design parameter space. We present design simulations of beryllium targets with a large CCR = 4.3-3.7. These are scaled surrogates of large hohlraum, low Tr beryllium targets, with the goal of quantifying symmetry tunability as a function of CCR. This work performed under the auspices of the U.S. DOE by LANL under contract DE-AC52-06NA25396, and by LLNL under Contract DE-AC52-07NA27344.

  11. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    PubMed

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone will not be sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
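
    The selection criterion described above reduces to a normal-distribution P-value once the mean and standard deviation of MFEs for randomized sequences of the same composition are known. A minimal sketch (the numbers below are assumed, standing in for the interpolated pre-computed tables):

        import math

        def mfe_p_value(mfe, mu_random, sigma_random):
            """P(MFE of a randomized sequence <= observed MFE), normal approximation."""
            z = (mfe - mu_random) / sigma_random
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        # A candidate hairpin with an MFE well below the random mean gets a small P-value.
        print(mfe_p_value(-42.0, mu_random=-25.0, sigma_random=6.0))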

  12. Experimental Quantification of Pore-Scale Flow Phenomena in 2D Heterogeneous Porous Micromodels: Multiphase Flow Towards Coupled Solid-Liquid Interactions

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kazemifar, F.; Blois, G.; Christensen, K. T.

    2017-12-01

    Geological sequestration of CO2 within saline aquifers is a viable technology for reducing CO2 emissions. Central to this goal is accurately predicting both the fidelity of candidate sites pre-injection of CO2 and its post-injection migration. Moreover, local fluid pressure buildup may cause activation of small pre-existing unidentified faults, leading to micro-seismic events, which could prove disastrous for societal acceptance of CCS, and possibly compromise seal integrity. Recent evidence shows that large-scale events are coupled with pore-scale phenomena, which necessitates the representation of pore-scale stress, strain, and multiphase flow processes in large-scale modeling. To this end, the pore-scale flow of water and liquid/supercritical CO2 is investigated under reservoir-relevant conditions, over a range of wettability conditions in 2D heterogeneous micromodels that reflect the complexity of a real sandstone. High-speed fluorescent microscopy, complemented by a fast differential pressure transmitter, allows for simultaneous measurement of the flow field within and the instantaneous pressure drop across the micromodels. A flexible micromodel is also designed and fabricated, to be used in conjunction with the micro-PIV technique, enabling the quantification of coupled solid-liquid interactions.

  13. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
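
    IR extrapolations of this kind commonly fit an exponential form such as E(L) = E_inf + a exp(-2 k_inf L), where L is the effective IR length of the basis. A sketch with synthetic numbers (invented for illustration; these are not data from the paper):

        import numpy as np
        from scipy.optimize import curve_fit

        def model(L, E_inf, a, k_inf):
            """IR extrapolation form: E(L) = E_inf + a * exp(-2 * k_inf * L)."""
            return E_inf + a * np.exp(-2.0 * k_inf * L)

        L = np.array([8.0, 10.0, 12.0, 14.0, 16.0])        # effective IR lengths (fm)
        E = np.array([-27.1, -29.9, -31.1, -31.6, -31.8])  # synthetic energies (MeV)

        (E_inf, a, k_inf), _ = curve_fit(model, L, E, p0=(-32.0, 50.0, 0.3))
        print(f"extrapolated E_inf = {E_inf:.2f} MeV, k_inf = {k_inf:.2f} fm^-1")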

  14. How much does a tokamak reactor cost?

    NASA Astrophysics Data System (ADS)

    Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.

    2017-10-01

    The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both overnight cost and levelized cost of electricity, the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high-T_c rare-earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.

  15. Nexus of the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.; Hellwing, Wojciech A.

    2015-01-01

    One of the important unknowns of current cosmology concerns the effects of the large-scale distribution of matter on the formation and evolution of dark matter haloes and galaxies. One main difficulty in answering this question lies in the absence of a robust and natural way of identifying the large-scale environments and their characteristics. This work summarizes the NEXUS+ formalism, which extends and improves our multiscale scale-space MMF method. The new algorithm is very successful in tracing the Cosmic Web components, mainly due to its novel filtering of the density in logarithmic space. The method, due to its multiscale and hierarchical character, has the advantage of detecting all the cosmic structures, either prominent or tenuous, without preference for a certain size or shape. The resulting filamentary and wall networks can easily be characterized by their direction, thickness, mass density and density profile. These additional environmental properties allow us to investigate not only the effect of environment on haloes, but also how it correlates with the environment's characteristics.

  16. Turbulent mass transfer caused by vortex induced reconnection in collisionless magnetospheric plasmas.

    PubMed

    Nakamura, T K M; Hasegawa, H; Daughton, W; Eriksson, S; Li, W Y; Nakamura, R

    2017-11-17

    Magnetic reconnection is believed to be the main driver of the transport of solar wind into the Earth's magnetosphere when the magnetopause features a large magnetic shear. However, even when the magnetic shear is too small for spontaneous reconnection, the Kelvin-Helmholtz instability driven by a super-Alfvénic velocity shear is expected to facilitate the transport. Although previous kinetic simulations have demonstrated that the non-linear vortex flows from the Kelvin-Helmholtz instability give rise to vortex-induced reconnection and resulting plasma transport, the system sizes of these simulations were too small to allow the reconnection to evolve much beyond the electron scale, as recently observed by the Magnetospheric Multiscale (MMS) spacecraft. Here, based on a large-scale kinetic simulation and its comparison with MMS observations, we show for the first time that ion-scale jets from vortex-induced reconnection rapidly decay through self-generated turbulence, leading to a mass transfer rate nearly one order of magnitude higher than previous expectations for the Kelvin-Helmholtz instability.

  17. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  18. Thin Disk Accretion in the Magnetically-Arrested State

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.

    2016-01-01

    Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Though the theory is widely applied and provides powerful tools for explaining observations (such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and, recently, reverberation mapping), a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R ~ 0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and it extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state and providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.

  19. Quantum information processing with long-wavelength radiation

    NASA Astrophysics Data System (ADS)

    Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried

    To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.

  20. Cosmic microwave background trispectrum and primordial magnetic field limits.

    PubMed

    Trivedi, Pranjal; Seshadri, T R; Subramanian, Kandaswamy

    2012-06-08

    Primordial magnetic fields will generate non-Gaussian signals in the cosmic microwave background (CMB), as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. We compute a new measure of magnetic non-Gaussianity, the CMB trispectrum, on large angular scales, sourced via the Sachs-Wolfe effect. The trispectra induced by magnetic energy density and by magnetic scalar anisotropic stress are found to have typical magnitudes of approximately a few times 10^-29 and 10^-19, respectively. Observational limits on CMB non-Gaussianity from WMAP data allow us to conservatively set upper limits of about a nanogauss (nG), and plausibly sub-nG, on the present value of the primordial cosmic magnetic field. This represents the tightest limit so far on the strength of primordial magnetic fields on Mpc scales, and is better than limits from the CMB bispectrum and all modes in the CMB power spectrum. Thus, the CMB trispectrum is a new and more sensitive probe of primordial magnetic fields on large scales.

  1. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates, searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
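
    A minimal sketch of the indexing scheme described above (the median-thresholding quantizer and the random data are stand-ins, not the paper's construction): descriptors are binarized into compact codes that key an inverted file mapping each code to the IDs of the boxes holding it.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        descriptors = rng.random((1000, 128))        # stand-ins for SIFT descriptors
        box_ids = rng.integers(0, 200, size=1000)    # the box each feature came from

        thresholds = np.median(descriptors, axis=0)  # a simple per-dimension quantizer

        def quantize(d):
            bits = (d > thresholds).astype(np.uint8)
            return np.packbits(bits).tobytes()       # compact 16-byte binary code

        inverted_file = defaultdict(list)            # code -> IDs of boxes holding it
        for d, box in zip(descriptors, box_ids):
            inverted_file[quantize(d)].append(int(box))

        query = descriptors[0]                       # query with a known descriptor
        print(inverted_file[quantize(query)])        # candidate boxes to verify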

  2. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  3. Large-scale magnetic topologies of mid M dwarfs

    NASA Astrophysics Data System (ADS)

    Morin, J.; Donati, J.-F.; Petit, P.; Delfosse, X.; Forveille, T.; Albert, L.; Aurière, M.; Cabanac, R.; Dintrans, B.; Fares, R.; Gastine, T.; Jardine, M. M.; Lignières, F.; Paletou, F.; Ramirez Velez, J. C.; Théado, S.

    2008-10-01

    We present in this paper the first results of a spectropolarimetric analysis of a small sample (~20) of active stars ranging from spectral type M0 to M8, which are either fully convective or possess a very small radiative core. This study aims at providing new constraints on dynamo processes in fully convective stars. This paper focuses on five stars of spectral type ~M4, i.e. with masses close to the full convection threshold (≈0.35 Msolar), and with short rotational periods. Tomographic imaging techniques allow us to reconstruct the surface magnetic topologies from the rotationally modulated time-series of circularly polarized profiles. We find that all stars host mainly axisymmetric large-scale poloidal fields. Three stars were observed at two different epochs separated by ~1 yr; we find the magnetic topologies to be globally stable on this time-scale. We also provide an accurate estimation of the rotational period of all stars, thus allowing us to start studying how rotation impacts the large-scale magnetic field. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) and the Télescope Bernard Lyot (TBL). CFHT is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France (INSU/CNRS) and the University of Hawaii, while the TBL is operated by CNRS/INSU.

  4. Identifying the Threshold of Dominant Controls on Fire Spread in a Boreal Forest Landscape of Northeast China

    PubMed Central

    Liu, Zhihua; Yang, Jian; He, Hong S.

    2013-01-01

    The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls responds to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression tree methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of the relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and are more likely to reach larger sizes. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique, as sketched below. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduces an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
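
    As a rough illustration of the analysis pipeline described above (assumed variable layout and window sizes; scikit-learn's gradient-boosted trees stand in for whatever BRT implementation the study used), relative importances can be tracked across windows of increasing fire size:

        # Hedged sketch: relative importance of fuel/topography/weather per window.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        def relative_importance(X, y):
            """X columns: fuel, topography, weather; y: observed fire spread."""
            model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, y)
            return 100.0 * model.feature_importances_   # percent per predictor

        def moving_window(X, y, fire_sizes, window=500, step=50):
            """Slide a fixed-width window along fires sorted by size and score each
            window, exposing where the dominant control switches."""
            order = np.argsort(fire_sizes)
            X, y, sizes = X[order], y[order], fire_sizes[order]
            for start in range(0, len(y) - window + 1, step):
                sl = slice(start, start + window)
                yield np.median(sizes[sl]), relative_importance(X[sl], y[sl])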

  5. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  6. EFT of large scale structures in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco

    Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc⁻¹ or k ~ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  7. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
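
    As an indication of what such a heterogeneous on-chip model looks like from the modeller's side, here is a minimal PyNN-style script combining the two neuron models named above (a hedged sketch: the pyNN.spiNNaker module name assumes the sPyNNaker backend is installed, and all sizes and parameters are illustrative):

        # Hedged sketch: heterogeneous LIF + Izhikevich network via PyNN.
        import pyNN.spiNNaker as sim   # assumes the sPyNNaker backend

        sim.setup(timestep=1.0)        # 1 ms resolution

        lif = sim.Population(100, sim.IF_curr_exp(tau_m=20.0), label="LIF pool")
        izh = sim.Population(100, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0),
                             label="Izhikevich pool")

        # Sparse random coupling between the two model populations.
        sim.Projection(lif, izh, sim.FixedProbabilityConnector(0.1),
                       synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

        lif.record("spikes")
        izh.record("spikes")
        sim.run(1000.0)                # simulate one second
        sim.end()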

  8. EFT of large scale structures in redshift space

    DOE PAGES

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; ...

    2018-03-15

    Here, we further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc⁻¹ or k ~ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  9. Environmental impact assessment and environmental audit in large-scale public infrastructure construction: the case of the Qinghai-Tibet Railway.

    PubMed

    He, Guizhen; Zhang, Lei; Lu, Yonglong

    2009-09-01

    Large-scale public infrastructure projects have featured in China's modernization course since the early 1980s. During the early stages of China's rapid economic development, public attention focused on the economic and social impact of high-profile construction projects. In recent years, however, we have seen a shift in public concern toward the environmental and ecological effects of such projects, and today governments are required to provide valid environmental impact assessments prior to allowing large-scale construction. The official requirement for the monitoring of environmental conditions has led to an increased number of debates in recent years regarding the effectiveness of Environmental Impact Assessments (EIAs) and Governmental Environmental Audits (GEAs) as environmental safeguards in instances of large-scale construction. Although EIA and GEA are conducted by different institutions and have different goals and enforcement potential, these two practices can be closely related in terms of methodology. This article cites the construction of the Qinghai-Tibet Railway as an instance in which EIA and GEA offer complementary approaches to environmental impact management. This study concludes that the GEA approach can serve as an effective follow-up to the EIA and establishes that the EIA lays a base for conducting future GEAs. The relationship that emerges through a study of the Railway's construction calls for more deliberate institutional arrangements and cooperation if the two practices are to be used in concert to optimal effect.

  10. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
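
    The backbone of such multiple time-step schemes is the reversible RESPA splitting; the sketch below is generic (f_fast and f_slow are placeholders for whichever fast/slow force decomposition is used, e.g. fragment or range-separated components, which this sketch does not implement):

        # Hedged sketch: two-level reversible RESPA (multiple time step) integrator.
        def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
            """Advance positions x and velocities v by one outer step; slow forces
            kick at dt_outer, fast forces are integrated with dt_outer/n_inner."""
            dt_inner = dt_outer / n_inner
            v = v + 0.5 * dt_outer * f_slow(x) / m      # half kick, slow forces
            for _ in range(n_inner):                    # velocity Verlet, fast forces
                v = v + 0.5 * dt_inner * f_fast(x) / m
                x = x + dt_inner * v
                v = v + 0.5 * dt_inner * f_fast(x) / m
            v = v + 0.5 * dt_outer * f_slow(x) / m      # closing half kick
            return x, v

    With dt_outer = 2.5 fs and n_inner = 5, the fast forces are still integrated at the familiar 0.5 fs step while the slow (expensive) component is evaluated five times less often, consistent with the speedups quoted above.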

  11. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
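
    The recycling idea can be sketched in a few lines (a conceptual simplification, not the MADS/Julia implementation; a plain Arnoldi process on J^T J stands in for whatever Krylov method the authors use, and breakdown handling is omitted):

        # Hedged sketch: one Krylov basis reused across LM damping parameters.
        import numpy as np

        def arnoldi(A, b, k):
            """Orthonormal basis Q and Hessenberg H for the k-dim Krylov space of A
            (assumes no breakdown, i.e. H[j+1, j] != 0 throughout)."""
            n = b.size
            Q = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
            Q[:, 0] = b / np.linalg.norm(b)
            for j in range(k):
                w = A @ Q[:, j]
                for i in range(j + 1):
                    H[i, j] = Q[:, i] @ w
                    w -= H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                Q[:, j + 1] = w / H[j + 1, j]
            return Q, H

        def lm_steps(J, r, lambdas, k=20):
            """Approximately solve (J^T J + lam*I) dx = -J^T r for several damping
            values lam, building the Krylov basis once and recycling it."""
            A = J.T @ J; g = J.T @ r
            Q, H = arnoldi(A, g, k)
            Qk, Hk = Q[:, :k], H[:k, :]                 # projected operator
            for lam in lambdas:
                y = np.linalg.solve(Hk + lam * np.eye(k), -(Qk.T @ g))
                yield lam, Qk @ y                       # step lifted back to full space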

  12. WKB theory of large deviations in stochastic populations

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
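
    For orientation, the central objects of the formalism can be written compactly (generic notation, not tied to a particular system): for a well-mixed population of typical size N, with scaled reaction rates w_r(x) and population steps Δ_r, the leading-order WKB (eikonal) ansatz and the resulting Hamilton-Jacobi problem are

        P_n(t) \simeq e^{-N S(x,t)}, \qquad x = n/N,

        \partial_t S + H\!\left(x, \partial_x S\right) = 0, \qquad
        H(x,p) = \sum_r w_r(x)\left(e^{p\,\Delta_r} - 1\right),

    so that, for example, the mean extinction time scales exponentially, \tau \sim e^{N\,\Delta S}, with \Delta S the action accumulated along the optimal (instanton) path to extinction.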

  13. Markets for Clean Air

    NASA Astrophysics Data System (ADS)

    Ellerman, A. Denny; Joskow, Paul L.; Schmalensee, Richard; Montero, Juan-Pablo; Bailey, Elizabeth M.

    2000-06-01

    Markets for Clean Air provides a comprehensive, in-depth description and evaluation of the first three years' experience with the U.S. Acid Rain Program. This environmental control program is the world's first large-scale use of a tradable emission permit system for achieving environmental goals. The book analyzes the behavior and performance of the market for emissions permits, called allowances in the Acid Rain Program, and quantifies emission reductions, compliance costs, and cost savings associated with the trading program. The book also includes chapters on the historical context in which this pioneering program developed and the political economy of allowance allocations.

  14. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis

    PubMed Central

    2012-01-01

    Background The multiplexing capacity has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Results Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously resolved by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be practically analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing throughput demands. PMID:22276739
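
    The combinatorics are easy to picture with a toy demultiplexer (hypothetical barcode strings; only the 4 x 8 = 32 structure is taken from the abstract):

        # Hedged sketch: pair-barcode demultiplexing into 32 libraries.
        from itertools import product

        forward = ["ACGT", "TGCA", "GATC", "CTAG"]                 # 4 assumed tags
        reverse = ["AAGG", "CCTT", "GGAA", "TTCC",
                   "AGAG", "TCTC", "GAGA", "CTCT"]                 # 8 assumed tags
        library_of = {pair: i for i, pair in enumerate(product(forward, reverse))}

        def demultiplex(fwd_tag, rev_tag):
            """Return the library index, or None when either barcode is unmatched
            (roughly a third of reads in the pilot runs were unassigned)."""
            return library_of.get((fwd_tag, rev_tag))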

  15. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.

    PubMed

    Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong

    2012-01-25

    The multiplexing capacity has become the major limitation of next-generation sequencing (NGS) in application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously resolved by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be practically analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing throughput demands.

  16. Exploring the role of movement in determining the global distribution of marine biomass using a coupled hydrodynamic - size-based ecosystem model

    NASA Astrophysics Data System (ADS)

    Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.

    2015-11-01

    Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼2.84 × 10⁹ tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters, and further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented the viable range of large predators is greatly increased, and stunted biomass spectra characterizing large ocean regions in simulations without movement, are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.

  17. A linear concatenation strategy to construct 5'-enriched amplified cDNA libraries using multiple displacement amplification.

    PubMed

    Gadkar, Vijay J; Filion, Martin

    2013-06-01

    In various experimental systems, limiting available amounts of RNA may prevent a researcher from performing large-scale analyses of gene transcripts. One way to circumvent this is to 'pre-amplify' the starting RNA/cDNA, so that sufficient amounts are available for any downstream analysis. In the present study, we report the development of a novel protocol for constructing amplified cDNA libraries using the Phi29 DNA polymerase-based multiple displacement amplification (MDA) system. Using as little as 200 ng of total RNA, we developed a linear concatenation strategy to make the single-stranded cDNA template amenable to MDA. The concatenation, made possible by the template switching property of the reverse transcriptase enzyme, resulted in an amplified cDNA library with intact 5' ends. MDA generated micrograms of template, allowing large-scale polymerase chain reaction analyses or other large-scale downstream applications. As the amplified cDNA library contains intact 5' ends, it is also compatible with 5' RACE analyses of specific gene transcripts. Empirical validation of this protocol is demonstrated on a highly characterized (tomato) and an uncharacterized (corn gromwell) experimental system.

  18. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
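
    The underlying estimator is simple to state: if n_j is the number of species observed exactly j times, the expected number of new species revealed by a (1+t)-fold increase in sampling is an alternating series, sketched below in plain truncated form (the rational-function acceleration that makes t > 1 usable is the paper's contribution and is not reproduced here):

        # Hedged sketch: truncated Good-Toulmin prediction of new species.
        from collections import Counter

        def good_toulmin(species_counts, t, jmax=50):
            """species_counts: list of per-species observation counts.
            Returns the expected number of previously unseen species after
            increasing sampling effort (1+t)-fold; the raw series diverges
            for t > 1, which motivates rational-function approximants."""
            n = Counter(species_counts)    # n[j] = number of species seen j times
            return sum((-1) ** (j + 1) * (t ** j) * n.get(j, 0)
                       for j in range(1, jmax + 1))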

  19. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  20. Polymer Dynamics from Synthetic to Biological Macromolecules

    NASA Astrophysics Data System (ADS)

    Richter, D.; Niedzwiedz, K.; Monkenbusch, M.; Wischnewski, A.; Biehl, R.; Hoffmann, B.; Merkel, R.

    2008-02-01

    High resolution neutron scattering, together with a meticulous choice of contrast conditions, allows access to the large-scale dynamics of soft materials, including biological molecules, in space and time. In this contribution we present two examples: one from the world of synthetic polymers, the other from biomolecules. First, we address the peculiar dynamics of miscible polymer blends with very different component glass transition temperatures. Poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) are perfectly miscible but differ in glass transition temperature by 200 K. We present quasielastic neutron scattering investigations of the dynamics of the fast component in the range from ångströms to nanometers over a time frame of five orders of magnitude. All data may be consistently described in terms of a Rouse model with random friction, reflecting the random environment imposed by the nearly frozen PMMA matrix on the fast, mobile PEO. In the second part we touch on some new developments relating to the large-scale internal dynamics of proteins by neutron spin echo. We report results of some pioneering studies which show the feasibility of such experiments on large-scale protein motion and which will most likely initiate further studies in the future.

  1. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and for handling strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10⁹ points.
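
    The multi-scale neighborhood idea can be illustrated with generic covariance (eigenvalue) features (assumed radii and feature definitions; the paper's actual feature set is richer):

        # Hedged sketch: per-point eigenvalue features at several neighborhood radii.
        import numpy as np
        from scipy.spatial import cKDTree

        def eigen_features(neighbors):
            """Linearity, planarity, scattering from covariance eigenvalues."""
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1] + 1e-12
            return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

        def multiscale_features(points, radii=(0.25, 0.5, 1.0)):
            tree = cKDTree(points)                     # fast radius queries
            out = np.zeros((len(points), 3 * len(radii)))
            for s, r in enumerate(radii):
                for i, p in enumerate(points):
                    nn = points[tree.query_ball_point(p, r)]
                    if len(nn) >= 3:                   # need a non-degenerate covariance
                        out[i, 3 * s:3 * s + 3] = eigen_features(nn)
            return out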

  2. Single field double inflation and primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannike, K.; Marzola, L.; Raidal, M.

    Within the framework of scalar-tensor theories, we study the conditions that allow single field inflation dynamics on small cosmological scales to differ significantly from that on the large scales probed by observations of the cosmic microwave background. The resulting single field double inflation scenario is characterised by two consecutive inflationary eras, usually separated by a period where the slow-roll approximation fails. At large field values the dynamics of the inflaton is dominated by the interplay between its non-minimal coupling to gravity and the radiative corrections to the inflaton self-coupling. For small field values the potential is, instead, dominated by a polynomial that results in hilltop inflation. Without relying on the slow-roll approximation, which is invalidated by the appearance of the intermediate stage, we propose a concrete model that matches the current measurements of inflationary observables and employs the freedom granted by the framework on small cosmological scales to give rise to a sizeable population of primordial black holes generated by large curvature fluctuations. We find that these features generally require a potential with a local minimum. We show that the associated primordial black hole mass function is only approximately lognormal.

  3. Large-Scale Production of Nanographite by Tube-Shear Exfoliation in Water

    PubMed Central

    Engström, Ann-Christine; Hummelgård, Magnus; Andres, Britta; Forsberg, Sven; Olin, Håkan

    2016-01-01

    The number of applications based on graphene, few-layer graphene, and nanographite is rapidly increasing. A large-scale process for the production of these materials is critically needed to achieve cost-effective commercial products. Here, we present a novel process to mechanically exfoliate industrial quantities of nanographite from graphite in an aqueous environment with low energy consumption and at controlled shear conditions. This process, based on hydrodynamic tube shearing, produced nanometer-thick and micrometer-wide flakes of nanographite with a production rate exceeding 500 g h⁻¹ at an energy consumption of about 10 Wh g⁻¹. In addition, to facilitate large-area coating, we show that the nanographite can be mixed with nanofibrillated cellulose in the process to form highly conductive, robust and environmentally friendly composites. This composite has a sheet resistance below 1.75 Ω/sq and an electrical resistivity of 1.39 × 10⁻⁴ Ω m and may find use in several applications, from supercapacitors and batteries to printed electronics and solar cells. A batch of 100 liters was processed in less than 4 hours. The design of the process allows scaling to even larger volumes, and the low energy consumption indicates a low-cost process. PMID:27128841

  4. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    PubMed

    Tal-Ezer, Hillel

    2016-05-19

    Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of a molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of the Hilbert space is huge; practically, only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or Eigs of Matlab) but significantly simpler.
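
    Since the abstract notes mathematical equivalence to ARPACK, the following shows the same computation through SciPy's ARPACK wrapper (a usage sketch with an assumed toy Hamiltonian, not the EvArnoldi code):

        # Hedged sketch: a few extreme eigenpairs of a large sparse "Hamiltonian".
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh          # ARPACK under the hood

        n = 100_000
        H = sp.diags([np.full(n - 1, -1.0),            # assumed toy tridiagonal matrix
                      np.linspace(0.0, 10.0, n),
                      np.full(n - 1, -1.0)],
                     offsets=[-1, 0, 1], format="csr")

        vals, vecs = eigsh(H, k=6, which="SA")         # six smallest eigenvalues
        print(vals)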

  5. MOOSE: A PARALLEL COMPUTATIONAL FRAMEWORK FOR COUPLED SYSTEMS OF NONLINEAR EQUATIONS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Hansen; C. Newman; D. Gaston

    Systems of coupled, nonlinear partial differential equations often arise in the simulation of nuclear processes. MOOSE, the Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at solving these systems, is presented. As opposed to traditional data-flow-oriented computational frameworks, MOOSE is instead founded on the mathematics of the Jacobian-free Newton-Krylov (JFNK) method. Utilizing the mathematical structure present in JFNK, physics are modularized into “Kernels”, allowing for rapid production of new simulation tools. In addition, systems are solved fully coupled and fully implicit, employing physics-based preconditioning, which allows for a large amount of flexibility even with large variance in time scales. Background on the mathematics, an inspection of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
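
    The JFNK idea itself fits in a few lines (a generic matrix-free sketch in Python for illustration; MOOSE is a C++ framework and adds physics-based preconditioning, which is omitted here):

        # Hedged sketch: Jacobian-free Newton-Krylov with a finite-difference J*v.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
            """Newton iteration in which GMRES only ever needs J*v, approximated
            by a directional finite difference of the residual."""
            u = u0.copy()
            for _ in range(max_newton):
                r = residual(u)
                if np.linalg.norm(r) < tol:
                    break
                jv = lambda v: (residual(u + eps * v) - r) / eps
                J = LinearOperator((u.size, u.size), matvec=jv)
                du, _ = gmres(J, -r, atol=1e-10)
                u = u + du
            return u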

  6. Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widlund, Olof B.

    2015-06-09

    The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver for a coarse model in order to achieve performance that is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.

  7. The Use of the Time Average Visibility for Analyzing HERA-19 Commissioning Data

    NASA Astrophysics Data System (ADS)

    Gallardo, Samavarti; Benefo, Roshan; La Plante, Paul; Aguirre, James; HERA Collaboration

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) is a radio telescope that will observe large-scale structure throughout the cosmic reionization epoch. This will allow us to characterize the evolution of the 21 cm power spectrum to constrain the timing and morphology of reionization, the properties of the first galaxies, the evolution of large-scale structure, and the early sources of heating. We develop a simple and robust observable for the HERA-19 commissioning data, the Time Average Visibility (TAV). We compare both redundantly and absolutely calibrated visibilities to detailed instrument simulations and to analytical expectations, and explore the signal present in the TAV. The TAV has already been demonstrated as a method to reject poorly performing antennas, and may be improved with this work to allow a simple cross-check of the calibration solutions without imaging.
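
    The observable itself is simple; a schematic of its computation (array layout assumed) is:

        # Hedged sketch: Time Average Visibility from calibrated visibilities.
        import numpy as np

        def time_average_visibility(vis):
            """vis: complex array, shape (n_times, n_baselines, n_freqs).
            Coherently average over time, then take the magnitude; badly
            calibrated or misbehaving antennas stand out per baseline."""
            return np.abs(vis.mean(axis=0))            # shape (n_baselines, n_freqs)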

  8. Large Aperture Camera for the Simons Observatory

    NASA Astrophysics Data System (ADS)

    Dicker, Simon; Simons Observatory Collaboration

    2018-01-01

    The Simons Observatory will consist of one large 6 m telescope and three or more smaller telescopes working together, with the goal of measuring the polarization of the Cosmic Microwave Background on angular scales from as small as 1' to larger than 1 degree, and at a sensitivity far greater than has ever been reached before. To reach the sensitivities needed for our science goals, we require over 90,000 background-limited TES detectors on the large telescope - hence a very large field of view. The telescope design we have selected is a copy of the CCAT-prime telescope, a crossed Dragone with extra aspheric terms to increase the diffraction-limited field of view. At the secondary focus will be a 2.5 m diameter cryostat containing re-imaging silicon optics which can correct the remaining aberrations (mostly astigmatism) at the edge of the field of view and allow this part of the focal plane to be used at higher frequencies. This poster contains an outline of our optical designs and takes a brief look at how they could be scaled to a larger telescope.

  9. ISMIP6 - initMIP: Greenland ice sheet model initialisation experiments

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Payne, Tony; Larour, Eric; Abe Ouchi, Ayako; Gregory, Jonathan; Lipscomb, William; Seroussi, Helene; Shepherd, Andrew; Edwards, Tamsin

    2016-04-01

    Earlier large-scale Greenland ice sheet sea-level projections, e.g. those run during the ice2sea and SeaRISE initiatives, have shown that ice sheet initialisation can have a large effect on the projections and gives rise to important uncertainties. This intercomparison exercise (initMIP) aims at comparing, evaluating and improving the initialisation techniques used in the ice sheet modeling community and at estimating the associated uncertainties. It is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experiments are conceived for the large-scale Greenland ice sheet and are designed to allow intercomparison between participating models of 1) the initial present-day state of the ice sheet and 2) the response in two schematic forward experiments. The latter experiments serve to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss first results of the intercomparison and highlight important uncertainties with respect to projections of the Greenland ice sheet sea-level contribution.

  10. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently, on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  11. Overcoming potential energy distortions in constrained internal coordinate molecular dynamics simulations.

    PubMed

    Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan

    2016-01-28

    The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple avenue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large-scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with enhanced conformational sampling methods such as temperature replica exchange, allows the sampling of large-scale domain motion involving high-energy-barrier transitions. Once these large-scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low-energy conformations sampled via coarse-grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom force fields without compromising the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD. We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.

  12. Overcoming potential energy distortions in constrained internal coordinate molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B.; Jain, Abhinandan; Vaidehi, Nagarajan

    2016-01-01

    The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple avenue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large-scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with enhanced conformational sampling methods such as temperature replica exchange, allows the sampling of large-scale domain motion involving high-energy-barrier transitions. Once these large-scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low-energy conformations sampled via coarse-grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom force fields without compromising the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD. We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.

  13. Blueprints for green biotech: development and application of standards for plant synthetic biology.

    PubMed

    Patron, Nicola J

    2016-06-15

    Synthetic biology aims to apply engineering principles to the design and modification of biological systems and to the construction of biological parts and devices. The ability to programme cells by providing new instructions written in DNA is a foundational technology of the field. Large-scale de novo DNA synthesis has accelerated synthetic biology by offering custom-made molecules at ever decreasing costs. However, for large fragments and for experiments in which libraries of DNA sequences are assembled in different combinations, assembly in the laboratory is still desirable. Biological assembly standards allow DNA parts, even those from multiple laboratories and experiments, to be assembled together using the same reagents and protocols. The adoption of such standards for plant synthetic biology has been cohesive for the plant science community, facilitating the application of genome editing technologies to plant systems and streamlining progress in large-scale, multi-laboratory bioengineering projects. © 2016 The Author(s). published by Portland Press Limited on behalf of the Biochemical Society.

  14. Population size effects in evolutionary dynamics on neutral networks and toy landscapes

    NASA Astrophysics Data System (ADS)

    Sumedha; Martin, Olivier C.; Peliti, Luca

    2007-05-01

    We study the dynamics of a population subject to selective pressures, evolving either on RNA neutral networks or on toy fitness landscapes. We discuss the spread and the neutrality of the population in the steady state. Different limits arise depending on whether selection or random drift is dominant. In the presence of strong drift we show that the observables depend mainly on Mμ, M being the population size and μ the mutation rate, while corrections to this scaling go as 1/M: such corrections can be quite large in the presence of selection if there are barriers in the fitness landscape. Also we find that the convergence to the large-Mμ limit is linear in 1/Mμ. Finally we introduce a protocol that minimizes drift; then observables scale like 1/M rather than 1/(Mμ), allowing one to determine the large-M limit more quickly when μ is small; furthermore the genotypic diversity increases from O(ln M) to O(M).

  15. Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Kroo, I. M.

    1995-01-01

    Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies. These advantageous features include reducing the amount of information transferred between disciplines, the removal of large iteration loops, allowing the use of different subspace optimizers among the various analysis groups, an analysis framework which is easily parallelized and can operate on heterogeneous equipment, and a structural framework that is well suited to conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.

  16. Advances in DNA sequencing technologies for high resolution HLA typing.

    PubMed

    Cereb, Nezih; Kim, Hwa Ran; Ryu, Jaejun; Yang, Soo Young

    2015-12-01

    This communication describes our experience in large-scale G group-level high resolution HLA typing using three different DNA sequencing platforms - the ABI 3730xl, Illumina MiSeq and PacBio RS II. Recent advances in DNA sequencing technologies, so-called next-generation sequencing (NGS), have brought breakthroughs in deciphering the genetic information of all living species at a large scale and at an affordable level. The NGS DNA indexing system allows sequencing multiple genes for a large number of individuals in a single run. Our laboratory has adopted and used these technologies for HLA molecular testing services. We found that each sequencing technology has its own strengths and weaknesses, and their sequencing performances complement each other. HLA genes are highly complex, and genotyping them is quite challenging. Using these three sequencing platforms, we were able to meet all requirements for G group-level high resolution and high volume HLA typing. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  17. TheCellMap.org: A Web-Accessible Database for Visualizing and Mining the Global Yeast Genetic Interaction Network

    PubMed Central

    Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L.; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2017-01-01

    Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. PMID:28325812

  18. Fluid limit of nonintegrable continuous-time random walks in terms of fractional differential equations.

    PubMed

    Sánchez, R; Carreras, B A; van Milligen, B Ph

    2005-01-01

    The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
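
    As a point of reference for the kind of equation meant here, the canonical linear special case (not the nonintegrable family of the paper) is the space-time fractional diffusion equation

        \frac{\partial^{\beta} n(x,t)}{\partial t^{\beta}}
            = D\, \frac{\partial^{\alpha} n(x,t)}{\partial |x|^{\alpha}},
        \qquad 0 < \beta \le 1, \quad 0 < \alpha \le 2,

    with a Caputo derivative in time and a Riesz fractional derivative in space; α = 2, β = 1 recovers ordinary diffusion, while intermediate exponents mix transport mechanisms with disparate spatial and temporal scales.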

  19. TheCellMap.org: A Web-Accessible Database for Visualizing and Mining the Global Yeast Genetic Interaction Network.

    PubMed

    Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2017-05-05

    Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. Copyright © 2017 Usaj et al.

  20. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
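
    Schematically, the scale-similar (Bardina-type) term and the mixed model discussed above take the textbook forms (illustrative notation and coefficients, not the paper's calibrated values)

        \tau_{ij}^{\,ss} = C_{ss}\left( \widehat{\bar{u}_i \bar{u}_j}
            - \hat{\bar{u}}_i\, \hat{\bar{u}}_j \right),
        \qquad
        \tau_{ij}^{\,mixed} = \tau_{ij}^{\,ss}
            - 2\, C_s\, \bar{\Delta}^{2} \,|\bar{S}|\, \bar{S}_{ij},

    where the bar is the grid filter, the hat a second (test) filter, and \bar{S}_{ij} the resolved strain-rate tensor; the eddy-viscosity part supplies dissipation while the scale-similar part carries backscatter and stress anisotropy.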

  1. Models for the rise of the dinosaurs.

    PubMed

    Benton, Michael J; Forth, Jonathan; Langer, Max C

    2014-01-20

    Dinosaurs arose in the early Triassic in the aftermath of the greatest mass extinction ever and became hugely successful in the Mesozoic. Their initial diversification is a classic example of a large-scale macroevolutionary change. Diversifications at such deep-time scales can now be dissected, modelled and tested. New fossils suggest that dinosaurs originated early in the Middle Triassic, during the recovery of life from the devastating Permo-Triassic mass extinction. Improvements in stratigraphic dating and a new suite of morphometric and comparative evolutionary numerical methods now allow a forensic dissection of one of the greatest turnovers in the history of life. Such studies mark a move from the narrative to the analytical in macroevolutionary research, and they allow us to begin to answer the proposal of George Gaylord Simpson, to explore adaptive radiations using numerical methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Large-scale Distribution of Arrival Directions of Cosmic Rays Detected Above 10¹⁸ eV at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antiči'c, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Buroker, L.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gemmeke, H.; Ghia, P. L.; Giller, M.; Gitto, J.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jansen, S.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; LaHurd, D.; Latronico, L.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. 
A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Messina, S.; Meurer, C.; Meyhandan, R.; Mi'canovi'c, S.; Micheletti, M. I.; Minaya, I. A.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Cabo, I.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; 'Smiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Taşcău, O.; Tcaciuc, R.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. 
J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano Garcia, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.

    2012-12-01

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10¹⁸ eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10¹⁸ eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
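    A dipole in right ascension is commonly quantified with a first-harmonic (Rayleigh) analysis. The minimal sketch below applies it to synthetic isotropic arrival directions rather than Auger data, returning the amplitude, phase, and chance probability that such searches report:

      import numpy as np

      def rayleigh_first_harmonic(ra_rad):
          """First-harmonic (Rayleigh) analysis of right ascensions in radians.

          Returns the amplitude r, the phase, and the probability that an
          isotropic sky produces an amplitude >= r by chance.
          """
          n = len(ra_rad)
          a = 2.0 / n * np.sum(np.cos(ra_rad))
          b = 2.0 / n * np.sum(np.sin(ra_rad))
          r = np.hypot(a, b)
          phase = np.arctan2(b, a)
          p_chance = np.exp(-n * r**2 / 4.0)  # Rayleigh test probability
          return r, phase, p_chance

      # Synthetic isotropic sample: the amplitude should come out small.
      rng = np.random.default_rng(0)
      ra = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
      print(rayleigh_first_harmonic(ra))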

  3. Flexible Redistribution in Cognitive Networks.

    PubMed

    Hartwigsen, Gesa

    2018-06-15

    Previous work has emphasized that cognitive functions in the human brain are organized into large-scale networks. However, the mechanisms that allow these networks to compensate for focal disruptions remain elusive. I suggest a new perspective on the compensatory flexibility of cognitive networks. First, I demonstrate that cognitive networks can rapidly change the functional weight of the relative contribution of different regions. Second, I argue that there is an asymmetry in the compensatory potential of different kinds of networks. Specifically, recruitment of domain-general functions can partially compensate for focal disruptions of specialized cognitive functions, but not vice versa. Considering the compensatory potential within and across networks will increase our understanding of functional adaptation and reorganization after brain lesions and offers a new perspective on large-scale neural network (re-)organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Modeling Veterans Healthcare Administration disclosure processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyeler, Walter E; DeMenno, Mercy B.; Finley, Patrick D.

    As with other large healthcare organizations, medical adverse events at Department of Veterans Affairs (VA) facilities can expose patients to unforeseen negative risks. VHA leadership recognizes that properly handled disclosure of adverse events can minimize potential harm to patients and negative consequences for the effective functioning of the organization. The work documented here seeks to help improve the disclosure process by situating it within the broader theoretical framework of issues management, and to identify opportunities for process improvement through modeling disclosure and reactions to disclosure. The computational model will allow a variety of disclosure actions to be tested across a range of incident scenarios. Our conceptual model will be refined in collaboration with domain experts, especially by continuing to draw on insights from VA Study of the Communication of Adverse Large-Scale Events (SCALE) project researchers.

  5. The Crotone Megalandslide, southern Italy: Architecture, timing and tectonic control.

    PubMed

    Zecchin, Massimo; Accaino, Flavio; Ceramicola, Silvia; Civile, Dario; Critelli, Salvatore; Da Lio, Cristina; Mangano, Giacomo; Prosser, Giacomo; Teatini, Pietro; Tosi, Luigi

    2018-05-17

    Large-scale submarine gravitational land movements involving sedimentary successions more than 1,000 m thick are known as megalandslides. Using seismic, morpho-bathymetric and well data, we prove the existence of large-scale gravitational phenomena off the Crotone Basin, a forearc basin located on the Ionian side of Calabria (southern Italy). Our study reveals that the Crotone Megalandslide started moving between the Late Zanclean and Early Piacenzian and was triggered by a contractional tectonic event leading to basin inversion. Seaward gliding of the megalandslide continued until roughly the Late Gelasian and then resumed at a modest rate from the Middle Pleistocene. Interestingly, the onshore part of the basin does not show gravity-driven deformation comparable to that observed in the marine area, and this peculiar evidence allows some speculation on the origin of the megalandslide.

  6. Studies of the cosmic ray spectrum and large scale anisotropies with the KASCADE-Grande experiment

    NASA Astrophysics Data System (ADS)

    Chiavassa, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Cossavella, F.; Curcio, C.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.

    2014-08-01

    KASCADE-Grande is an air shower observatory devoted to the detection of cosmic rays in the 10¹⁶–10¹⁸ eV energy range. For each event the arrival direction, the total number of charged particles (Nch) and the total number of muons (Nμ) at detection level (i.e. 110 m a.s.l.) are measured. Detecting these observables with high accuracy allows the study of the primary spectrum, chemical composition and large-scale anisotropies, which provide the information relevant to the astrophysics of cosmic rays in this energy range. These studies are of central importance for investigating in depth the change of slope of the primary spectrum detected at ~4 × 10¹⁵ eV, also known as the knee, and for searching for the transition from galactic to extragalactic cosmic rays.

  7. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631

  8. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of monitoring large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.
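    The swarm-optimization step can be sketched generically. Below is a minimal global-best PSO in Python, with a toy quadratic objective standing in for the CNN parameter-fitting problem; the paper's improved, bubble-sort-based variant is not reproduced here:

      import numpy as np

      def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
          """Minimal particle swarm optimizer (global-best topology)."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
          v = np.zeros_like(x)                             # velocities
          pbest = x.copy()                                 # personal bests
          pbest_val = np.apply_along_axis(f, 1, x)
          g = pbest[np.argmin(pbest_val)].copy()           # global best
          w, c1, c2 = 0.7, 1.5, 1.5                        # standard coefficients
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              vals = np.apply_along_axis(f, 1, x)
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, pbest_val.min()

      # Toy 4-parameter "template" tuned against a quadratic objective.
      best, val = pso_minimize(lambda p: np.sum((p - 0.3) ** 2), dim=4)
      print(best, val)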

  9. Inferring field-scale properties of a fractured aquifer from ground surface deformation during a well test

    NASA Astrophysics Data System (ADS)

    Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Boudin, Frédérick; Durand, Stéphane; Lavenant, Nicolas

    2015-12-01

    Fractured aquifers which bear valuable water resources are often difficult to characterize with classical hydrogeological tools due to their intrinsic heterogeneities. Here we implement ground surface deformation tools (tiltmetry and optical leveling) to monitor groundwater pressure changes induced by a classical hydraulic test at the Ploemeur observatory. By jointly analyzing complementary time constraining data (tilt) and spatially constraining data (vertical displacement), our results strongly suggest that the use of these surface deformation observations allows for estimating storativity and structural properties (dip, root depth, and lateral extension) of a large hydraulically active fracture, in good agreement with previous studies. Hence, we demonstrate that ground surface deformation is a useful addition to traditional hydrogeological techniques and opens possibilities for characterizing important large-scale properties of fractured aquifers with short-term well tests as a controlled forcing.

  10. Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle

    NASA Technical Reports Server (NTRS)

    Yang, Kyung-Soo; Ferziger, Joel H.

    1993-01-01

    In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on bulk velocity above the obstacle and obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include LES at Re = 82,000 (Case 4) using conventional Smagorinsky subgrid-scale model and a wall-layer model. The results will be compared with the experiment of Dimaczek et al.

  11. Spatial and temporal variance in fatty acid and stable isotope signatures across trophic levels in large river systems

    USGS Publications Warehouse

    Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca

    2018-01-01

    Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ¹³C and δ¹⁵N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s–100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ¹⁵N), indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.
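    As a minimal illustration of this kind of variance partitioning, the sketch below splits synthetic isotope-like values into an among-region component and a within-site component; the groups and values are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(4)
      group_means = {"river_A": 8.0, "river_B": 11.0, "river_C": 14.0}
      # 30 synthetic delta-15N-like measurements per region
      regions = {r: rng.normal(loc=mu, scale=1.0, size=30)
                 for r, mu in group_means.items()}

      values = np.concatenate(list(regions.values()))
      grand = values.mean()
      among = np.mean([(v.mean() - grand) ** 2 for v in regions.values()])
      within = np.mean([v.var(ddof=1) for v in regions.values()])
      print(f"among-region variance ~ {among:.2f}, within-site variance ~ {within:.2f}")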

  12. Prospects of Detecting HI using Redshifted 21-cm Radiation at z ~ 3

    NASA Astrophysics Data System (ADS)

    Gehlot, Bharat Kumar; Bagla, J. S.

    2017-03-01

    Distribution of cold gas in the post-reionization era provides an important link between the distribution of galaxies and the process of star formation. Redshifted 21-cm radiation from the hyperfine transition of neutral hydrogen allows us to probe the neutral component of cold gas, most of which is to be found in the interstellar medium of galaxies. Existing and upcoming radio telescopes can probe the large-scale distribution of neutral hydrogen via HI intensity mapping. In this paper, we use an estimate of the HI power spectrum derived using an ansatz to compute the expected signal from the large-scale HI distribution at z ~ 3. We find that the scale dependence of bias at small scales makes a significant difference to the expected signal even at large angular scales. We compare the predicted signal strength with the sensitivity of radio telescopes that can observe such radiation and calculate the observation time required for detecting neutral hydrogen at these redshifts. We find that OWFA (Ooty Wide Field Array) offers the best possibility to detect neutral hydrogen at z ~ 3 before the SKA (Square Kilometre Array) becomes operational. We find that OWFA should be able to make a 3σ or more significant detection in 2000 hours of observations at several angular scales. Calculations done using the Fisher matrix approach indicate that a 5σ detection of the binned HI power spectrum via measurement of the amplitude of the HI power spectrum is possible in 1000 h (Sarkar et al. 2017).

  13. Large-Scale Geographic Variation in Distribution and Abundance of Australian Deep-Water Kelp Forests

    PubMed Central

    Marzinelli, Ezequiel M.; Williams, Stefan B.; Babcock, Russell C.; Barrett, Neville S.; Johnson, Craig R.; Jordan, Alan; Kendrick, Gary A.; Pizarro, Oscar R.; Smale, Dan A.; Steinberg, Peter D.

    2015-01-01

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution of these kelp forests along the continent, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia’s Integrated Marine Observing System (IMOS) to survey 157,000 m² of seabed, of which ca 13,000 m² were used to quantify kelp cover at multiple spatial scales (10–100 m to 100–1,000 km) and depths (15–60 m) across several regions ca 2–6° latitude apart along the east and west coasts of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. The maximum depth of kelp occurrence was 40–50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves. PMID:25693066

  14. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
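    A typical tuning metric of the kind such instrumentation collects is rollback (event) efficiency: the fraction of processed events that were not undone. A minimal sketch, with counts chosen arbitrarily (the exact metrics ROSS reports may differ in detail):

      def event_efficiency(committed_events, rolled_back_events):
          """Fraction of processed events that were not rolled back."""
          processed = committed_events + rolled_back_events
          return committed_events / processed if processed else 1.0

      print(event_efficiency(9_500_000, 500_000))  # -> 0.95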

  15. Remote third shift EAST operation: a new paradigm

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.; Coviello, E.; Eidietis, N.; Flanagan, S.; Garcia, F.; Humphreys, D.; Kostuk, M.; Lanctot, M.; Lee, X.; Margo, M.; Miller, D.; Parker, C.; Penaflor, B.; Qian, J. P.; Sun, X.; Tan, H.; Walker, M.; Xiao, B.; Yuan, Q.

    2017-05-01

    General Atomics’ (GA) scientists in the United States remotely conducted experimental operation of the Experimental Advanced Superconducting Tokamak (EAST) in China during its third shift. Scientists led these experiments in a dedicated remote control room that utilized a novel computer science hardware and software infrastructure to allow data movement, visualization, and communication on the time scale of EAST’s pulse cycle. This Fusion Science Collaboration Zone infrastructure allows the movement of large amounts of data between continents on a short time scale, with a 300-fold increase in data transfer rate over that available using the traditional transmission protocol. Real-time data from control systems are moved almost instantaneously. An event system tied to the EAST pulse cycle allows automatic initiation of data transfers, resulting in bulk EAST data being transferred to GA within minutes. The EAST data at GA are served via MDSplus to approved US collaborators, which keeps multiple US clients from separately requesting data from EAST and competing for the long-haul network’s bandwidth. At present there are 37 approved scientists from 8 US research institutions.
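    Remote signal access of this kind is typically done through the MDSplus thin-client interface. A minimal Python sketch follows; the server address, tree name, shot number, and node path are hypothetical placeholders, not the actual EAST setup:

      # Hypothetical server, tree, shot, and node names for illustration only.
      from MDSplus import Connection

      conn = Connection('mdsplus.example.org')   # data server (placeholder)
      conn.openTree('east', 123456)              # tree and shot (placeholders)
      ip = conn.get('\\PCRL01')                  # a signal node (placeholder)
      t = conn.get('dim_of(\\PCRL01)')           # its time base via TDI dim_of()
      print(ip.data()[:5], t.data()[:5])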

  16. Operation of an aquatic worm reactor suitable for sludge reduction at large scale.

    PubMed

    Hendrickx, Tim L G; Elissen, Hellen H J; Temmink, Hardy; Buisman, Cees J N

    2011-10-15

    Treatment of domestic wastewater results in the production of waste sludge, which requires costly further processing. A biological method to reduce the amount of waste sludge and its volume is treatment in an aquatic worm reactor. The potential of such a worm reactor with the oligochaete Lumbriculus variegatus has been shown at small scale. For scaling-up purposes, a new reactor configuration was designed in which the worms were positioned horizontally in the carrier material. This was tested in an 8-week continuous experiment in which it treated all the waste sludge from a lab-scale activated sludge process. The results showed a higher worm growth rate than in previous experiments with the old configuration, whilst nutrient release was similar. The new configuration has a small footprint and allows for easy aeration and faeces collection, making it suitable for full-scale application. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
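    The building block that such hierarchical models pool across species is the per-species power-law fit y = a·xᵇ, which is linear in log-log space. A minimal sketch on synthetic data follows; the full hierarchical Bayesian machinery of the paper is not reproduced:

      import numpy as np

      # log y = log a + b log x, so an ordinary linear fit in log space
      # recovers the allometric exponent b and prefactor a.
      rng = np.random.default_rng(1)
      x = rng.uniform(1.0, 100.0, 200)                     # e.g. stem diameter
      y = 2.5 * x ** 0.75 * rng.lognormal(0.0, 0.1, 200)   # noisy 3/4-power law

      b, log_a = np.polyfit(np.log(x), np.log(y), 1)
      print(f"exponent b = {b:.3f}, prefactor a = {np.exp(log_a):.3f}")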

  18. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are therefore readily applied. These allow the global scale (large domain) to be resolved with a relatively coarse mesh, while the local scale (small domain) is resolved in high detail. As the total flow decreases, however, the flow pattern becomes tortuous, and a larger local-scale domain is required to obtain a converged solution. A classical homogenization-based approach might therefore not be feasible for simulating very small flows. To study small flows, a model that allows feasibly sized local domains at very small flow rates is developed. This was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model agrees better with the direct deterministic one than the conventional homogenization-type model does, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975

  19. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared (NIR) modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop NIR calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
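    NIR blend calibrations of this kind are commonly built with partial least squares regression; the abstract does not name the algorithm, so PLS here is an assumption, and the spectra and concentrations below are synthetic:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Synthetic stand-in for a calibration set: 60 spectra x 400 wavelengths,
      # with the "analyte concentration" driven by two spectral channels.
      rng = np.random.default_rng(2)
      X = rng.normal(size=(60, 400))
      y = X[:, 100] * 0.8 + X[:, 250] * 0.2 + rng.normal(0.0, 0.05, 60)

      model = PLSRegression(n_components=3).fit(X, y)
      print("calibration R^2:", model.score(X, y))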

  20. Anticipatory Traumatic Reaction: Outcomes Arising From Secondary Exposure to Disasters and Large-Scale Threats.

    PubMed

    Hopwood, Tanya L; Schutte, Nicola S; Loi, Natasha M

    2017-09-01

    Two studies, with a total of 707 participants, developed and examined the reliability and validity of a measure for anticipatory traumatic reaction (ATR), a novel construct describing a form of distress that may occur in response to threat-related media reports and discussions. Exploratory and confirmatory factor analysis resulted in a scale comprising three subscales: feelings related to future threat; preparatory thoughts and actions; and disruption to daily activities. Internal consistency was .93 for the overall ATR scale. The ATR scale demonstrated convergent validity through associations with negative affect, depression, anxiety, stress, neuroticism, and repetitive negative thinking. The scale showed discriminant validity in relationships to Big Five characteristics. The ATR scale had some overlap with a measure of posttraumatic stress disorder, but also showed substantial separate variance. This research provides preliminary evidence for the novel construct of ATR as well as a measure of the construct. The ATR scale will allow researchers to further investigate anticipatory traumatic reaction in the fields of trauma, clinical practice, and social psychology.
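    The internal-consistency figure quoted above is Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal sketch on synthetic item scores (not the study's data):

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_var / total_var)

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(300, 1))                       # shared trait
      scores = latent + rng.normal(0.0, 0.6, size=(300, 12))   # 12 correlated items
      print(round(cronbach_alpha(scores), 2))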
