Science.gov

Sample records for large-scale correlation function

  1. On soft limits of large-scale structure correlation functions

    NASA Astrophysics Data System (ADS)

    Ben-Dayan, Ido; Konstandin, Thomas; Porto, Rafael A.; Sagunski, Laura

    2015-02-01

    We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we re-derive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are solely based on symmetry arguments and are therefore universal. Then, we explore the existence of equal-time relations in the soft limit which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literature: the time-flow formalism, and a background method where the soft mode is absorbed into a locally curved cosmology. The latter has been recently used to set up (angular averaged) `equal-time consistency relations'. We explicitly demonstrate that the time-flow relations and `equal-time consistency conditions' are only fulfilled at the linear level, and fail at next-to-leading order for an Einstein-de Sitter universe. While both proposals break down beyond leading order when applied to the velocities, we find that the `equal-time consistency conditions' quantitatively approximate the perturbative results for the density contrast. Thus, we generalize the background method to properly incorporate the effect of curvature in the density and velocity fluctuations on short scales, and discuss the reasons behind this discrepancy. We conclude with a few comments on practical implementations and future directions.

  2. Large-scale 3D galaxy correlation function and non-Gaussianity

    SciTech Connect

    Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele; Maartens, Roy

    2014-08-01

    We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects: wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f_NL is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
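
    For reference, the multipolar expansion referred to above is the standard Legendre decomposition of the anisotropic correlation function (a textbook definition, not quoted from the paper); in the flat-sky limit only the even multipoles survive, so a nonzero dipole (ℓ = 1) and octupole (ℓ = 3) are a signature of the wide-angle and relativistic terms:

      \xi(r, \mu) = \sum_{\ell} \xi_\ell(r) \, P_\ell(\mu),
      \qquad
      \xi_\ell(r) = \frac{2\ell + 1}{2} \int_{-1}^{1} \xi(r, \mu) \, P_\ell(\mu) \, d\mu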

  3. Detection of the baryon acoustic peak in the large-scale correlation function of SDSS luminous red galaxies

    SciTech Connect

    Eisenstein, Daniel J.; Zehavi, Idit; Hogg, David W.; Scoccimarro, Roman; Blanton, Michael R.; Nichol, Robert C.; Scranton, Ryan; Seo, Hee-Jong; Tegmark, Max; Zheng, Zheng; Anderson, Scott F.; Annis, Jim; Bahcall, Neta; Brinkmann, Jon; Burles, Scott; Castander, Francisco J.; Connolly, Andrew; Csabai, Istvan; Doi, Mamoru; Fukugita, Masataka; Frieman, Joshua A.; et al.

    2005-01-01

    We present the large-scale correlation function measured from a spectroscopic sample of 46,748 luminous red galaxies from the Sloan Digital Sky Survey. The survey region covers 0.72 h⁻³ Gpc³ over 3816 square degrees and 0.16 < z < 0.47, making it the best sample yet for the study of large-scale structure. We find a well-detected peak in the correlation function at 100 h⁻¹ Mpc separation that is an excellent match to the predicted shape and location of the imprint of the recombination-epoch acoustic oscillations on the low-redshift clustering of matter. This detection demonstrates the linear growth of structure by gravitational instability between z ≈ 1000 and the present and confirms a firm prediction of the standard cosmological theory. The acoustic peak provides a standard ruler by which we can measure the ratio of the distances to z = 0.35 and z = 1089 to 4% fractional accuracy and the absolute distance to z = 0.35 to 5% accuracy. From the overall shape of the correlation function, we measure the matter density Ω_m h² to 8% and find agreement with the value from cosmic microwave background (CMB) anisotropies. Independent of the constraints provided by the CMB acoustic scale, we find Ω_m = 0.273 ± 0.025 + 0.123(1 + w₀) + 0.137 Ω_K. Including the CMB acoustic scale, we find that the spatial curvature is Ω_K = -0.010 ± 0.009 if the dark energy is a cosmological constant. More generally, our results provide a measurement of cosmological distance, and hence an argument for dark energy, based on a geometric method with the same simple physics as the microwave background anisotropies. The standard cosmological model convincingly passes these new and robust tests of its fundamental properties.
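
    To make the estimator behind such a measurement concrete, here is a minimal brute-force sketch of the Landy-Szalay correlation function estimator on simulated points (this is not the SDSS pipeline; the box, catalogs and binning are hypothetical stand-ins):

      import numpy as np

      def pair_counts(a, b, edges):
          # brute-force histogram of pairwise separations between two point sets
          d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          return np.histogram(d.ravel(), bins=edges)[0]

      def landy_szalay(data, randoms, edges):
          # xi(r) = (DD - 2DR + RR) / RR, with pair counts normalized per (ordered) pair
          nd, nr = len(data), len(randoms)
          dd = pair_counts(data, data, edges) / (nd * (nd - 1))
          rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1))
          dr = pair_counts(data, randoms, edges) / (nd * nr)
          return (dd - 2 * dr + rr) / rr

      # hypothetical usage: uniform points in a 100 Mpc/h box, bins from 1 to 50 Mpc/h
      rng = np.random.default_rng(0)
      data = rng.uniform(0, 100, (400, 3))
      randoms = rng.uniform(0, 100, (1200, 3))
      xi = landy_szalay(data, randoms, np.linspace(1, 50, 11))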

  4. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation

    PubMed Central

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance relative to existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant…
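
    A minimal sketch of the precision-matrix route to partial correlation described above, using scikit-learn's graphical lasso as a stand-in for CLIME (CLIME itself is not part of scikit-learn) and simulated time series in place of rs-fMRI data:

      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      def partial_correlation(ts):
          # ts: (n_timepoints, n_nodes) array of node time series
          prec = GraphicalLassoCV().fit(ts).precision_   # sparse precision (inverse covariance) matrix
          d = np.sqrt(np.diag(prec))
          pcorr = -prec / np.outer(d, d)                  # rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
          np.fill_diagonal(pcorr, 1.0)
          return pcorr

      rng = np.random.default_rng(1)
      ts = rng.standard_normal((200, 10))                 # 200 volumes, 10 nodes (simulated)
      print(partial_correlation(ts).round(2))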

  5. A large-scale study of the world wide web: network correlation functions with scale-invariant boundaries

    NASA Astrophysics Data System (ADS)

    Ludueña, Guillermo A.; Meixner, Harald; Kaczor, Gregor; Gros, Claudius

    2013-08-01

    We performed a large-scale crawl of the world wide web, covering 6.9 million domains and 57 million subdomains, including all high-traffic sites of the internet. We present a study of the correlations found between quantities measuring the structural relevance of each node in the network (the in- and out-degree, the local clustering coefficient, the first-neighbor in-degree and the Alexa rank). We find that some of these properties show strong correlation effects and that the dependencies occurring out of these correlations follow power laws not only for the averages, but also for the boundaries of the respective density distributions. In addition, these scale-free limits do not follow the same exponents as the corresponding averages. In our study we retain the directionality of the hyperlinks and develop a statistical estimate for the clustering coefficient of directed graphs. We include in our study the correlations between the in-degree and the Alexa traffic rank, a popular index for the traffic volume, finding non-trivial power-law correlations. We find that sites with more/less than about 10³ links from different domains have remarkably different statistical properties, for all correlation functions studied, pointing towards an underlying hierarchical structure of the world wide web.
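
    As a rough illustration of one of the correlations studied above, the sketch below computes the mean in-degree of a node's targets as a function of the node's own in-degree and fits a power law to the averages in log-log space (the directed clustering estimator and the boundary analysis from the paper are not reproduced; the random graph is only a hypothetical stand-in for the crawl):

      import numpy as np
      import networkx as nx

      G = nx.gnm_random_graph(5000, 30000, seed=2, directed=True)   # stand-in for the web graph
      indeg = dict(G.in_degree())

      by_k = {}                                  # node in-degree -> mean in-degree of its link targets
      for node in G:
          targets = list(G.successors(node))
          if targets and indeg[node] > 0:
              by_k.setdefault(indeg[node], []).append(np.mean([indeg[t] for t in targets]))

      k = np.array(sorted(by_k))
      knn = np.array([np.mean(by_k[x]) for x in k])
      slope, _ = np.polyfit(np.log(k), np.log(knn), 1)   # power-law exponent of the averaged correlation
      print(f"k_nn(k) ~ k^{slope:.2f}")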

  6. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  7. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  8. Ongoing dynamics in large-scale functional connectivity predict perception

    PubMed Central

    Sadaghiani, Sepideh; Poline, Jean-Baptiste; Kleinschmidt, Andreas; D’Esposito, Mark

    2015-01-01

    Most brain activity occurs in an ongoing manner not directly locked to external events or stimuli. Regional ongoing activity fluctuates in unison with some brain regions but not others, and the degree of long-range coupling is called functional connectivity, often measured with correlation. Strength and spatial distributions of functional connectivity dynamically change in an ongoing manner over seconds to minutes, even when the external environment is held constant. Direct evidence for any behavioral relevance of these continuous large-scale dynamics has been limited. Here, we investigated whether ongoing changes in baseline functional connectivity correlate with perception. In a continuous auditory detection task, participants perceived the target sound in roughly one-half of the trials. Very long (22–40 s) interstimulus intervals permitted investigation of baseline connectivity unaffected by preceding evoked responses. Using multivariate classification, we observed that functional connectivity before the target predicted whether it was heard or missed. Using graph theoretical measures, we characterized the difference in functional connectivity between states that lead to hits vs. misses. Before misses compared with hits and task-free rest, connectivity showed reduced modularity, a measure of integrity of modular network structure. This effect was strongest in the default mode and visual networks and caused by both reduced within-network connectivity and enhanced across-network connections before misses. The relation of behavior to prestimulus connectivity was dissociable from that of prestimulus activity amplitudes. In conclusion, moment to moment dynamic changes in baseline functional connectivity may shape subsequent behavioral performance. A highly modular network structure seems beneficial to perceptual efficiency. PMID:26106164
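
    To make the modularity measure used above concrete, here is a small sketch (simulated data, not the authors' pipeline) that thresholds a correlation matrix into a graph and computes Newman modularity with networkx:

      import numpy as np
      import networkx as nx
      from networkx.algorithms import community

      rng = np.random.default_rng(0)
      ts = rng.standard_normal((300, 40))                      # 300 time points, 40 regions (simulated)
      corr = np.corrcoef(ts.T)

      # keep only the strongest connections (simple proportional threshold)
      upper = corr[np.triu_indices_from(corr, k=1)]
      adj = (corr > np.quantile(upper, 0.9)) & ~np.eye(len(corr), dtype=bool)
      G = nx.from_numpy_array(adj.astype(int))

      modules = community.greedy_modularity_communities(G)
      Q = community.modularity(G, modules)
      print(f"modularity Q = {Q:.3f} over {len(modules)} modules")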

  9. Ongoing dynamics in large-scale functional connectivity predict perception.

    PubMed

    Sadaghiani, Sepideh; Poline, Jean-Baptiste; Kleinschmidt, Andreas; D'Esposito, Mark

    2015-07-01

    Most brain activity occurs in an ongoing manner not directly locked to external events or stimuli. Regional ongoing activity fluctuates in unison with some brain regions but not others, and the degree of long-range coupling is called functional connectivity, often measured with correlation. Strength and spatial distributions of functional connectivity dynamically change in an ongoing manner over seconds to minutes, even when the external environment is held constant. Direct evidence for any behavioral relevance of these continuous large-scale dynamics has been limited. Here, we investigated whether ongoing changes in baseline functional connectivity correlate with perception. In a continuous auditory detection task, participants perceived the target sound in roughly one-half of the trials. Very long (22-40 s) interstimulus intervals permitted investigation of baseline connectivity unaffected by preceding evoked responses. Using multivariate classification, we observed that functional connectivity before the target predicted whether it was heard or missed. Using graph theoretical measures, we characterized the difference in functional connectivity between states that lead to hits vs. misses. Before misses compared with hits and task-free rest, connectivity showed reduced modularity, a measure of integrity of modular network structure. This effect was strongest in the default mode and visual networks and caused by both reduced within-network connectivity and enhanced across-network connections before misses. The relation of behavior to prestimulus connectivity was dissociable from that of prestimulus activity amplitudes. In conclusion, moment to moment dynamic changes in baseline functional connectivity may shape subsequent behavioral performance. A highly modular network structure seems beneficial to perceptual efficiency. PMID:26106164

  10. Ongoing dynamics in large-scale functional connectivity predict perception.

    PubMed

    Sadaghiani, Sepideh; Poline, Jean-Baptiste; Kleinschmidt, Andreas; D'Esposito, Mark

    2015-07-01

    Most brain activity occurs in an ongoing manner not directly locked to external events or stimuli. Regional ongoing activity fluctuates in unison with some brain regions but not others, and the degree of long-range coupling is called functional connectivity, often measured with correlation. Strength and spatial distributions of functional connectivity dynamically change in an ongoing manner over seconds to minutes, even when the external environment is held constant. Direct evidence for any behavioral relevance of these continuous large-scale dynamics has been limited. Here, we investigated whether ongoing changes in baseline functional connectivity correlate with perception. In a continuous auditory detection task, participants perceived the target sound in roughly one-half of the trials. Very long (22-40 s) interstimulus intervals permitted investigation of baseline connectivity unaffected by preceding evoked responses. Using multivariate classification, we observed that functional connectivity before the target predicted whether it was heard or missed. Using graph theoretical measures, we characterized the difference in functional connectivity between states that lead to hits vs. misses. Before misses compared with hits and task-free rest, connectivity showed reduced modularity, a measure of integrity of modular network structure. This effect was strongest in the default mode and visual networks and caused by both reduced within-network connectivity and enhanced across-network connections before misses. The relation of behavior to prestimulus connectivity was dissociable from that of prestimulus activity amplitudes. In conclusion, moment to moment dynamic changes in baseline functional connectivity may shape subsequent behavioral performance. A highly modular network structure seems beneficial to perceptual efficiency.

  11. Large-scale data analysis using the Wigner function

    NASA Astrophysics Data System (ADS)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  12. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE PAGES

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework, a system which autonomously creates correlation detectors from event waveforms detected by power detectors, and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
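
    For orientation, a bare-bones sketch of the waveform correlation detector such a system is built on: slide a normalized cross-correlation of a template over a continuous trace and trigger above a threshold (synthetic data; the array stacking, channel weighting and autonomous template creation described in the paper are omitted):

      import numpy as np

      def correlation_detector(trace, template, threshold=0.7):
          # normalized cross-correlation (Pearson r) of the template against every window of the trace
          n = len(template)
          t = (template - template.mean()) / template.std()
          cc = np.empty(len(trace) - n + 1)
          for i in range(len(cc)):
              w = trace[i:i + n]
              cc[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12)) / n
          return np.where(cc > threshold)[0], cc

      rng = np.random.default_rng(3)
      template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)   # toy repeating-event waveform
      trace = rng.normal(0, 0.5, 5000)
      trace[1200:1400] += template                                          # bury one repeat in the noise
      triggers, cc = correlation_detector(trace, template)
      print(triggers, cc.max())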

  13. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality. PMID:27494095

  14. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.

  15. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.

  16. Development of large-scale functional networks over the lifespan.

    PubMed

    Schlee, Winfried; Leirer, Vera; Kolassa, Stephan; Thurm, Franka; Elbert, Thomas; Kolassa, Iris-Tatjana

    2012-10-01

    The development of large-scale functional organization of the human brain across the lifespan is not well understood. Here we used magnetoencephalographic recordings of 53 adults (ages 18-89) to characterize functional brain networks in the resting state. Slow frequencies engage larger networks than higher frequencies and show different development over the lifespan. Networks in the delta (2-4 Hz) frequency range decrease, while networks in the beta/gamma frequency range (> 16 Hz) increase in size with advancing age. Results show that the right frontal lobe and the temporal areas in both hemispheres are important relay stations in the expanding high-frequency networks. Neuropsychological tests confirmed the tendency of cognitive decline with older age. The decrease in visual memory and visuoconstructive functions was strongly associated with the age-dependent enhancement of functional connectivity in both temporal lobes. Using functional network analysis, this study elucidates important neuronal principles underlying age-related cognitive decline that pave the way for mental deterioration in senescence. PMID:22236372

  17. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
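
    For context, the hierarchical amplitude discussed here is conventionally defined (a standard definition, not quoted from the paper) as the connected three-point correlation function normalized by products of two-point functions,

      Q_3 \equiv \frac{\zeta(r_{12}, r_{23}, r_{31})}{\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}

    so that a roughly constant Q_3 signals hierarchical clustering, while the strongly scale-dependent bias considered above drives Q_J down at r ≳ R_p.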

  18. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  19. Large-Scale Functional Brain Network Reorganization During Taoist Meditation.

    PubMed

    Jao, Tun; Li, Chia-Wei; Vértes, Petra E; Wu, Changwei Wesley; Achard, Sophie; Hsieh, Chao-Hsien; Liou, Chien-Hui; Chen, Jyh-Horng; Bullmore, Edward T

    2016-02-01

    Meditation induces a distinct and reversible mental state that provides insights into brain correlates of consciousness. We explored brain network changes related to meditation by graph theoretical analysis of resting-state functional magnetic resonance imaging data. Eighteen Taoist meditators with varying levels of expertise were scanned using a within-subjects counterbalanced design during resting and meditation states. State-related differences in network topology were measured globally and at the level of individual nodes and edges. Although measures of global network topology, such as small-worldness, were unchanged, meditation was characterized by an extensive and expertise-dependent reorganization of the hubs (highly connected nodes) and edges (functional connections). Areas of sensory cortex, especially the bilateral primary visual and auditory cortices, and the bilateral temporopolar areas, which had the highest degree (or connectivity) during the resting state, showed the biggest decrease during meditation. Conversely, bilateral thalamus and components of the default mode network, mainly the bilateral precuneus and posterior cingulate cortex, had low degree in the resting state but increased degree during meditation. Additionally, these changes in nodal degree were accompanied by reorganization of anatomical orientation of the edges. During meditation, long-distance longitudinal (antero-posterior) edges increased proportionally, whereas orthogonal long-distance transverse (right-left) edges connecting bilaterally homologous cortices decreased. Our findings suggest that transient changes in consciousness associated with meditation introduce convergent changes in the topological and spatial properties of brain functional networks, and the anatomical pattern of integration might be as important as the global level of integration when considering the network basis for human consciousness.

  20. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at the large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large-scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantage of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with real data, and…

  1. Correlated motion of protein subdomains and large-scale conformational flexibility of RecA protein filament

    NASA Astrophysics Data System (ADS)

    Garmay, Yu.; Shvetsov, A.; Karelov, D.; Lebedev, D.; Radulescu, A.; Petukhov, M.; Isaev-Ivanov, V.

    2012-02-01

    Based on X-ray crystallographic data available at the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, which is known to be a long helical filament, was approximated by a trimer simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of the C-domain is correlated with conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with the large-scale filament conformation. PCA of the trajectories in the main-chain dihedral angle coordinate space indicates the co-existence of several different large-scale conformations of the modeled trimer. In order to clarify the relationship of independent domain orientation with large-scale filament conformation, we have performed an analysis of independent domain motions and their implications for the filament geometry.

  2. Dynamic competition between large-scale functional networks differentiates fear conditioning and extinction in humans.

    PubMed

    Marstaller, Lars; Burianová, Hana; Reutens, David C

    2016-07-01

    The high evolutionary value of learning when to respond to threats or when to inhibit previously learned associations after changing threat contingencies is reflected in dedicated networks in the animal and human brain. Recent evidence further suggests that adaptive learning may be dependent on the dynamic interaction of meta-stable functional brain networks. However, it is still unclear which functional brain networks compete with each other to facilitate associative learning and how changes in threat contingencies affect this competition. The aim of this study was to assess the dynamic competition between large-scale networks related to associative learning in the human brain by combining a repeated differential conditioning and extinction paradigm with independent component analysis of functional magnetic resonance imaging data. The results (i) identify three task-related networks involved in initial and sustained conditioning as well as extinction, and demonstrate that (ii) the two main networks that underlie sustained conditioning and extinction are anti-correlated with each other and (iii) the dynamic competition between these two networks is modulated in response to changes in associative contingencies. These findings provide novel evidence for the view that dynamic competition between large-scale functional networks differentiates fear conditioning from extinction learning in the healthy brain and suggest that dysfunctional network dynamics might contribute to learning-related neuropsychiatric disorders.

  3. Dynamic competition between large-scale functional networks differentiates fear conditioning and extinction in humans.

    PubMed

    Marstaller, Lars; Burianová, Hana; Reutens, David C

    2016-07-01

    The high evolutionary value of learning when to respond to threats or when to inhibit previously learned associations after changing threat contingencies is reflected in dedicated networks in the animal and human brain. Recent evidence further suggests that adaptive learning may be dependent on the dynamic interaction of meta-stable functional brain networks. However, it is still unclear which functional brain networks compete with each other to facilitate associative learning and how changes in threat contingencies affect this competition. The aim of this study was to assess the dynamic competition between large-scale networks related to associative learning in the human brain by combining a repeated differential conditioning and extinction paradigm with independent component analysis of functional magnetic resonance imaging data. The results (i) identify three task-related networks involved in initial and sustained conditioning as well as extinction, and demonstrate that (ii) the two main networks that underlie sustained conditioning and extinction are anti-correlated with each other and (iii) the dynamic competition between these two networks is modulated in response to changes in associative contingencies. These findings provide novel evidence for the view that dynamic competition between large-scale functional networks differentiates fear conditioning from extinction learning in the healthy brain and suggest that dysfunctional network dynamics might contribute to learning-related neuropsychiatric disorders. PMID:27079532

  4. Altered functional-structural coupling of large-scale brain networks in idiopathic generalized epilepsy.

    PubMed

    Zhang, Zhiqiang; Liao, Wei; Chen, Huafu; Mantini, Dante; Ding, Ju-Rong; Xu, Qiang; Wang, Zhengge; Yuan, Cuiping; Chen, Guanghui; Jiao, Qing; Lu, Guangming

    2011-10-01

    The human brain is a large-scale integrated network in the functional and structural domain. Graph theoretical analysis provides a novel framework for analysing such complex networks. While previous neuroimaging studies have uncovered abnormalities in several specific brain networks in patients with idiopathic generalized epilepsy characterized by tonic-clonic seizures, little is known about changes in whole-brain functional and structural connectivity networks. Regarding functional and structural connectivity, networks are intimately related and share common small-world topological features. We predict that patients with idiopathic generalized epilepsy would exhibit a decoupling between functional and structural networks. In this study, 26 patients with idiopathic generalized epilepsy characterized by tonic-clonic seizures and 26 age- and sex-matched healthy controls were recruited. Resting-state functional magnetic resonance imaging signal correlations and diffusion tensor image tractography were used to generate functional and structural connectivity networks. Graph theoretical analysis revealed that the patients lost optimal topological organization in both functional and structural connectivity networks. Moreover, the patients showed significant increases in nodal topological characteristics in several cortical and subcortical regions, including mesial frontal cortex, putamen, thalamus and amygdala relative to controls, supporting the hypothesis that regions playing important roles in the pathogenesis of epilepsy may display abnormal hub properties in network analysis. Relative to controls, patients showed further decreases in nodal topological characteristics in areas of the default mode network, such as the posterior cingulate gyrus and inferior temporal gyrus. Most importantly, the degree of coupling between functional and structural connectivity networks was decreased, and exhibited a negative correlation with epilepsy duration in patients. Our findings

  5. Covariance of cross-correlations: towards efficient measures for large-scale structure

    NASA Astrophysics Data System (ADS)

    Smith, Robert E.

    2009-12-01

    We study the covariance of the cross-power spectrum of different tracers for the large-scale structure. We develop the counts-in-cells framework for the multitracer approach, and use this to derive expressions for the full non-Gaussian covariance matrix. We show that for the usual autopower statistic, besides the off-diagonal covariance generated through gravitational mode-coupling, the discreteness of the tracers and their associated sampling distribution can generate strong off-diagonal covariance, and that this becomes the dominant source of covariance as spatial frequencies become larger than the fundamental mode of the survey volume. On comparison with the derived expressions for the cross-power covariance, we show that the off-diagonal terms can be suppressed, if one cross-correlates a high tracer-density sample with a low one. Taking the effective estimator efficiency to be proportional to the signal-to-noise ratio (S/N), we show that, to probe clustering as a function of physical properties of the sample, i.e. cluster mass or galaxy luminosity, the cross-power approach can outperform the autopower one by factors of a few. We confront the theory with measurements of the mass-mass, halo-mass and halo-halo power spectra from a large ensemble of N-body simulations. We show that there is a significant S/N advantage to be gained from using the cross-power approach when studying the bias of rare haloes. The analysis is repeated in configuration space and again S/N improvement is found. We estimate the covariance matrix for these samples, and find strong off-diagonal contributions. The covariance depends on halo mass, with higher mass samples having stronger covariance. In agreement with theory, we show that the covariance is suppressed for the cross-power. This work points the way towards improved estimators for studying the clustering of tracers as a function of their physical properties.
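
    As a reference point for the covariance comparison above, the Gaussian (purely diagonal) part of the cross-power variance for two tracers with mean densities n̄₁ and n̄₂, measured from N_k modes per bin, is commonly written as

      \mathrm{Var}\left[\hat{P}_{12}(k)\right] \simeq \frac{1}{N_k}\left[ P_{12}^2(k) + \left(P_{11}(k) + \frac{1}{\bar{n}_1}\right)\left(P_{22}(k) + \frac{1}{\bar{n}_2}\right) \right],

    which already shows why cross-correlating a dense sample with a sparse one suppresses the shot-noise penalty relative to the sparse sample's auto-power; the non-Gaussian and discreteness terms derived in the paper are not reproduced here.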

  6. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect

    Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  7. Correlated z-values and the accuracy of large-scale statistical estimates.

    PubMed

    Efron, Bradley

    2010-09-01

    We consider large-scale studies in which there are hundreds or thousands of correlated cases to investigate, each represented by its own normal variate, typically a z-value. A familiar example is provided by a microarray experiment comparing healthy with sick subjects' expression levels for thousands of genes. This paper concerns the accuracy of summary statistics for the collection of normal variates, such as their empirical cdf or a false discovery rate statistic. It seems like we must estimate an N by N correlation matrix, N the number of cases, but our main result shows that this is not necessary: good accuracy approximations can be based on the root mean square correlation over all N · (N - 1)/2 pairs, a quantity often easily estimated. A second result shows that z-values closely follow normal distributions even under non-null conditions, supporting application of the main theorem. Practical application of the theory is illustrated for a large leukemia microarray study. PMID:21052523
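
    A small sketch of the root mean square correlation that the accuracy approximations are based on (simulated data standing in for a gene-expression matrix of N cases):

      import numpy as np

      def rms_correlation(x):
          # root mean square of the N*(N-1)/2 pairwise correlations between the rows (cases) of x
          c = np.corrcoef(x)
          off = c[np.triu_indices_from(c, k=1)]
          return np.sqrt(np.mean(off ** 2))

      rng = np.random.default_rng(4)
      shared = rng.standard_normal(20)                     # a common factor inducing correlation across cases
      x = 0.5 * shared + rng.standard_normal((1000, 20))   # 1000 cases (e.g. genes), 20 subjects
      print(rms_correlation(x))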

  8. Correlated z-values and the accuracy of large-scale statistical estimates

    PubMed Central

    Efron, Bradley

    2009-01-01

    We consider large-scale studies in which there are hundreds or thousands of correlated cases to investigate, each represented by its own normal variate, typically a z-value. A familiar example is provided by a microarray experiment comparing healthy with sick subjects' expression levels for thousands of genes. This paper concerns the accuracy of summary statistics for the collection of normal variates, such as their empirical cdf or a false discovery rate statistic. It seems like we must estimate an N by N correlation matrix, N the number of cases, but our main result shows that this is not necessary: good accuracy approximations can be based on the root mean square correlation over all N · (N − 1)/2 pairs, a quantity often easily estimated. A second result shows that z-values closely follow normal distributions even under non-null conditions, supporting application of the main theorem. Practical application of the theory is illustrated for a large leukemia microarray study. PMID:21052523

  9. A Bayesian Estimate of the CMB-Large-scale Structure Cross-correlation

    NASA Astrophysics Data System (ADS)

    Moura-Santos, E.; Carvalho, F. C.; Penna-Lima, M.; Novaes, C. P.; Wuensche, C. A.

    2016-08-01

    Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs-Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB-LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined data set consisting of CMB temperature and galaxy contrast maps, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.

  10. Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards

    ERIC Educational Resources Information Center

    Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.

    2011-01-01

    This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search nine rewards placed in…

  11. The Matching Criterion Purification for Differential Item Functioning Analyses in a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Lee, HyeSun; Geisinger, Kurt F.

    2016-01-01

    The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…

  12. Using correlations between cosmic microwave background lensing and large-scale structure to measure primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Giannantonio, Tommaso; Percival, Will J.

    2014-06-01

    We apply a new method to measure primordial non-Gaussianity, using the cross-correlation between galaxy surveys and the cosmic microwave background (CMB) lensing signal to measure galaxy bias on very large scales, where local-type primordial non-Gaussianity predicts a k⁻² divergence. We use the CMB lensing map recently published by the Planck Collaboration, and measure its external correlations with a suite of six galaxy catalogues spanning a broad redshift range. We then consistently combine correlation functions to extend the recent analysis by Giannantonio et al., where the density-density and the density-CMB temperature correlations were used. Due to the intrinsic noise of the Planck lensing map, which affects the largest scales most severely, we find that the constraints on the galaxy bias are similar to the constraints from density-CMB temperature correlations. Including lensing constraints only improves the previous statistical measurement errors marginally, and we obtain f_NL = 12 ± 21 (1σ) from the combined data set. However, the lensing measurements serve as an excellent test of systematic errors: we now have three methods to measure the large-scale, scale-dependent bias from a galaxy survey: auto-correlation, and cross-correlation with both CMB temperature and lensing. As the publicly available Planck lensing maps have had their largest scale modes at multipoles l < 10 removed, which are the most sensitive to the scale-dependent bias, we consider mock CMB lensing data covering all multipoles. We find that, while the effect of f_NL indeed increases significantly on the largest scales, so do the contributions of both cosmic variance and the intrinsic lensing noise, so that the improvement is small.
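
    For orientation, the k⁻² divergence mentioned above comes from the scale-dependent halo bias induced by local-type non-Gaussianity, usually written (in one common convention, following Dalal et al.; not quoted from this paper, and up to the growth-factor normalization) as

      \Delta b(k, z) = 3 f_{\mathrm{NL}} \, (b - 1) \, \delta_c \, \frac{\Omega_m H_0^2}{c^2 \, k^2 \, T(k) \, D(z)},

    with δ_c ≈ 1.686, T(k) the matter transfer function and D(z) the linear growth factor; the 1/k² growth at small k is why the very largest scales, and hence the CMB lensing cross-correlation, carry most of the constraining power.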

  13. The large-scale quasar-Lyman α forest cross-correlation from BOSS

    SciTech Connect

    Font-Ribera, Andreu; Arnau, Eduard; Miralda-Escudé, Jordi; and others

    2013-05-01

    We measure the large-scale cross-correlation of quasars with the Lyα forest absorption in redshift space, using ∼ 60000 quasar spectra from Data Release 9 (DR9) of the Baryon Oscillation Spectroscopic Survey (BOSS). The cross-correlation is detected over a wide range of scales, up to comoving separations r of 80 h⁻¹ Mpc. For r > 15 h⁻¹ Mpc, we show that the cross-correlation is well fitted by the linear theory prediction for the mean overdensity around a quasar host halo in the standard ΛCDM model, with the redshift distortions indicative of gravitational evolution detected at high confidence. Using previous determinations of the Lyα forest bias factor obtained from the Lyα autocorrelation, we infer the quasar bias factor to be b_q = 3.64 (+0.13, −0.15) at a mean redshift z = 2.38, in agreement with previous measurements from the quasar auto-correlation. We also obtain a new estimate of the Lyα forest redshift distortion factor, β_F = 1.1 ± 0.15, slightly larger than but consistent with the previous measurement from the Lyα forest autocorrelation. The simple linear model we use fails at separations r < 15 h⁻¹ Mpc, and we show that this may reasonably be due to the enhanced ionization due to radiation from the quasars. We also provide the expected correction that the mass overdensity around the quasar implies for measurements of the ionizing radiation background from the line-of-sight proximity effect.
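
    Schematically, the linear-theory redshift-space model referred to above has the standard Kaiser form for two tracers (the paper's detailed fitting procedure is not reproduced here):

      P_{qF}(k, \mu) = b_q \, b_F \left(1 + \beta_q \mu^2\right)\left(1 + \beta_F \mu^2\right) P_{\mathrm{lin}}(k), \qquad \beta_i = \frac{f}{b_i},

    where μ is the cosine of the angle to the line of sight and f the linear growth rate; transforming to configuration space gives the cross-correlation ξ_qF(r) that is fitted for r > 15 h⁻¹ Mpc.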

  14. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism. PMID:26662457
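
    A bare-bones sketch of the seed-based analysis that ACA automates (pure NumPy on simulated data; ACA's actual interface is not shown here):

      import numpy as np

      def seed_connectivity(voxels, seed_mask):
          # voxels: (n_timepoints, n_voxels); seed_mask: boolean array selecting the seed region
          seed = voxels[:, seed_mask].mean(axis=1)                 # mean time series of the seed
          vz = (voxels - voxels.mean(0)) / voxels.std(0)
          sz = (seed - seed.mean()) / seed.std()
          r = vz.T @ sz / len(seed)                                # Pearson r of every voxel with the seed
          return np.arctanh(np.clip(r, -0.999, 0.999))             # Fisher z-transformed connectivity map

      rng = np.random.default_rng(5)
      data = rng.standard_normal((240, 5000))                      # 240 volumes, 5000 voxels (simulated)
      mask = np.zeros(5000, dtype=bool)
      mask[:50] = True                                             # hypothetical seed region
      zmap = seed_connectivity(data, mask)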

  15. Predicting protein functions from redundancies in large-scale protein interaction networks

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share significantly larger number of common interaction partners than random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
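
    A simplified sketch of the underlying idea (probability of two proteins sharing at least k interaction partners by chance, via a hypergeometric tail; the exact statistic and the clustering step used in the paper are not reproduced):

      from scipy.stats import hypergeom

      def shared_partner_pvalue(n_proteins, deg_a, deg_b, n_shared):
          # P(at least n_shared common partners) if the two partner sets were drawn at random
          return hypergeom.sf(n_shared - 1, n_proteins, deg_a, deg_b)

      # hypothetical example: 6000 proteins; two proteins with 30 and 25 partners share 6 of them
      print(shared_partner_pvalue(6000, 30, 25, 6))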

  16. Abnormalities in large scale functional networks in unmedicated patients with schizophrenia and effects of risperidone

    PubMed Central

    Kraguljac, Nina Vanessa; White, David Matthew; Hadley, Jennifer Ann; Visscher, Kristina; Knight, David; ver Hoef, Lawrence; Falola, Blessing; Lahti, Adrienne Carol

    2015-01-01

    Objective: To describe abnormalities in large scale functional networks in unmedicated patients with schizophrenia and to examine effects of risperidone on networks. Material and methods: 34 unmedicated patients with schizophrenia and 34 matched healthy controls were enrolled in this longitudinal study. We collected resting state functional MRI data with a 3T scanner at baseline and six weeks after they were started on risperidone. In addition, a group of 19 healthy controls were scanned twice six weeks apart. Four large scale networks, the dorsal attention network, executive control network, salience network, and default mode network, were identified with seed based functional connectivity analyses. Group differences in connectivity, as well as changes in connectivity over time, were assessed on the group's participant-level functional connectivity maps. Results: In unmedicated patients with schizophrenia we found resting state connectivity to be increased in the dorsal attention network, executive control network, and salience network relative to control participants, but not the default mode network. Dysconnectivity was attenuated after six weeks of treatment only in the dorsal attention network. Baseline connectivity in this network was also related to clinical response at six weeks of treatment with risperidone. Conclusions: Our results demonstrate abnormalities in large scale functional networks in patients with schizophrenia that are modulated by risperidone only to a certain extent, underscoring the dire need for development of novel antipsychotic medications that have the ability to alleviate symptoms through attenuation of dysconnectivity. PMID:26793436

  17. Optical correlator using very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators

    NASA Technical Reports Server (NTRS)

    Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.

    1993-01-01

    The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale-integrated / ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.

  18. Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase

    NASA Astrophysics Data System (ADS)

    Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele

    2015-02-01

    Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis by balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value for rationally guiding the development of new medications. Here, using a metadynamics-based computational protocol, we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows us to characterize the sequence of events that regulates its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.

  19. GPCR Network: a large-scale collaboration on GPCR structure and function

    PubMed Central

    Stevens, Raymond C.; Cherezov, Vadim; Katritch, Vsevolod; Abagyan, Ruben; Kuhn, Peter; Rosen, Hugh; Wüthrich, Kurt

    2013-01-01

    Preface Collaboration is a cornerstone of many successful scientific research endeavors, which distributes risks and rewards to encourage progress in challenging areas. A striking illustration is the large-scale project that seeks to achieve a robust fundamental understanding of structure and function in the human G protein-coupled receptor superfamily. The GPCR Network was created to achieve this goal based on an active outreach program addressing an interdisciplinary community of scientists interested in GPCR structure, chemistry and biology. PMID:23237917

  20. Searching for a correlation between cosmic-ray sources above 10^19 eV and large scale structure

    SciTech Connect

    Kashti, Tamar; Waxman, Eli E-mail: eli.waxman@weizmann.ac.il

    2008-05-15

    We study the anisotropy signature which is expected if the sources of ultrahigh energy, >10^19 eV, cosmic rays (UHECRs) are extra-galactic and trace the large scale distribution of luminous matter. Using the PSCz galaxy catalog as a tracer of the large scale structure (LSS), we derive the expected all sky angular distribution of the UHECR intensity. We define a statistic that measures the correlation between the predicted and observed UHECR arrival direction distributions, and show that it is more sensitive to the expected anisotropy signature than the power spectrum and the two-point correlation function. The distribution of the correlation statistic is not sensitive to the unknown redshift evolution of UHECR source density and to the unknown strength and structure of inter-galactic magnetic fields. We show, using this statistic, that recently published >5.7 × 10^19 eV Auger data are inconsistent with isotropy at ≈ 98% CL, and consistent with a source distribution that traces LSS, with some preference for a source distribution that is biased with respect to the galaxy distribution. The anisotropy signature should be detectable also at lower energy, >4 × 10^19 eV. A few-fold increase of the Auger exposure is likely to increase the significance to >99% CL, but not to >99.9% CL (unless the UHECR source density is comparable to or larger than that of galaxies). In order to distinguish between different bias models, the systematic uncertainty in the absolute energy calibration of the experiments should be reduced to well below the current ≈ 25%.

  1. Large-Scale Brain Networks of the Human Left Temporal Pole: A Functional Connectivity MRI Study

    PubMed Central

    Pascual, Belen; Masdeu, Joseph C.; Hollenbeck, Mark; Makris, Nikos; Insausti, Ricardo; Ding, Song-Lin; Dickerson, Bradford C.

    2015-01-01

    The most rostral portion of the human temporal cortex, the temporal pole (TP), has been described as “enigmatic” because its functional neuroanatomy remains unclear. Comparative anatomy studies are only partially helpful, because the human TP is larger and cytoarchitectonically more complex than in nonhuman primates. Considered by Brodmann as a single area (BA 38), the human TP has been recently parceled into an array of cytoarchitectonic subfields. In order to clarify the functional connectivity of subregions of the TP, we undertook a study of 172 healthy adults using resting-state functional connectivity MRI. Remarkably, a hierarchical cluster analysis performed to group the seeds into distinct subsystems according to their large-scale functional connectivity grouped 87.5% of the seeds according to the recently described cytoarchitectonic subregions of the TP. Based on large-scale functional connectivity, there appear to be 4 major subregions of the TP: 1) dorsal, with predominant connectivity to auditory/somatosensory and language networks; 2) ventromedial, predominantly connected to visual networks; 3) medial, connected to paralimbic structures; and 4) anterolateral, connected to the default-semantic network. The functional connectivity of the human TP, far more complex than its known anatomic connectivity in monkey, is concordant with its hypothesized role as a cortical convergence zone. PMID:24068551

  2. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
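
    The large-separation factorization that these bias functions encode can be written compactly; the notation below is an illustrative paraphrase of the standard large-deviation result rather than a quotation of the paper's equations.

```latex
% Joint statistics of densities \rho_1, \rho_2 measured in cells separated by r,
% in the large-separation limit (illustrative notation):
\langle \rho_1(\mathbf{x})\,\rho_2(\mathbf{x}+\mathbf{r}) \rangle
  \simeq \langle\rho_1\rangle\,\langle\rho_2\rangle
  \left[\, 1 + b(\rho_1)\, b(\rho_2)\, \xi_m(r) \,\right],
\qquad
b(\rho) \equiv \lim_{r\to\infty}
  \frac{\big\langle \delta_m(\mathbf{x}+\mathbf{r}) \,\big|\, \rho(\mathbf{x}) \big\rangle}{\xi_m(r)},
% where \xi_m(r) is the two-point correlation of the underlying dark matter field.
```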

  3. Effective Boolean dynamics analysis to identify functionally important genes in large-scale signaling networks.

    PubMed

    Trinh, Hung-Cuong; Kwon, Yung-Keun

    2015-11-01

    Efficiently identifying functionally important genes in order to understand the minimal requirements of normal cellular development is challenging. To this end, a variety of structural measures have been proposed and their effectiveness has been investigated in recent literature; however, few studies have shown the effectiveness of dynamics-based measures. This led us to investigate a dynamics-based measure for identifying functionally important genes, whose effectiveness was verified through application to two large-scale human signaling networks. We specifically consider Boolean sensitivity-based dynamics against an update-rule perturbation (BSU) as a dynamic measure. Through investigations on two large-scale human signaling networks, we found that genes with relatively high BSU values show slower evolutionary rates and higher proportions of essential genes and drug targets than other genes. Gene-ontology analysis showed clear differences between the former and latter groups of genes. Furthermore, we compare the identification accuracies of essential genes and drug targets via BSU and five well-known structural measures. Although BSU did not always show the best performance, it effectively identified a putative set of genes significantly different from the results obtained via the structural measures. Most interestingly, BSU showed the highest synergy effect in identifying the functionally important genes in conjunction with other measures. Our results imply that Boolean-sensitive dynamics can be used as a measure to effectively identify functionally important genes in signaling networks.
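
    A rough illustration of an update-rule-perturbation sensitivity measure: invert one node's Boolean rule and measure how far the perturbed trajectory diverges from the original one, averaged over random initial states. The network encoding, the perturbation scheme, and the averaging choices below are assumptions for this sketch, not the BSU definition used in the paper.

```python
import numpy as np

def trajectory(rules, state, steps):
    """Synchronously update a Boolean network; rules[i](state) -> 0/1 for node i."""
    traj = [state.copy()]
    for _ in range(steps):
        state = np.array([f(state) for f in rules], dtype=np.int8)
        traj.append(state.copy())
    return np.array(traj)

def rule_perturbation_sensitivity(rules, node, n_samples=100, steps=20, seed=0):
    """Average end-state Hamming divergence when one node's rule output is inverted."""
    rng = np.random.default_rng(seed)
    n = len(rules)
    perturbed = list(rules)
    perturbed[node] = lambda s, f=rules[node]: 1 - f(s)   # invert a single update rule
    div = 0.0
    for _ in range(n_samples):
        s0 = rng.integers(0, 2, size=n).astype(np.int8)
        t_orig = trajectory(rules, s0, steps)
        t_pert = trajectory(perturbed, s0, steps)
        div += np.mean(t_orig[-1] != t_pert[-1])          # fraction of nodes that differ
    return div / n_samples

# Toy 3-node network: node 0 = NOT node 2, node 1 = node 0 AND node 2, node 2 = node 1
rules = [lambda s: 1 - s[2], lambda s: s[0] & s[2], lambda s: s[1]]
print(rule_perturbation_sensitivity(rules, node=0))
```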

  4. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    PubMed Central

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with the traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729

  5. Captured metagenomics: large-scale targeting of genes based on 'sequence capture' reveals functional diversity in soils.

    PubMed

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K; Hedlund, Katarina; Ahrén, Dag

    2015-12-01

    Microbial enzyme diversity is key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with the traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances.

  6. Cholinergic and serotonergic modulations differentially affect large-scale functional networks in the mouse brain.

    PubMed

    Shah, Disha; Blockx, Ines; Keliris, Georgios A; Kara, Firat; Jonckers, Elisabeth; Verhoye, Marleen; Van der Linden, Annemie

    2016-07-01

    Resting-state functional MRI (rsfMRI) is a widely implemented technique used to investigate large-scale topology in the human brain during health and disease. Studies in mice provide additional advantages, including the possibility of flexibly modulating the brain by pharmacological or genetic manipulations in combination with high-throughput functional connectivity (FC) investigations. Pharmacological modulations that target specific neurotransmitter systems, partly mimicking the effect of pathological events, could allow the effects of specific systems on functional network disruptions to be distinguished. The current study investigated the effect of cholinergic and serotonergic antagonists on large-scale brain networks in mice. The cholinergic system is involved in cognitive functions and is impaired in, e.g., Alzheimer's disease, while the serotonergic system is involved in emotional and introspective functions and is impaired in, e.g., Alzheimer's disease, depression and autism. Of specific interest is the default mode network (DMN), which is studied extensively in humans and is affected in many neurological disorders. The results show that both cholinergic and serotonergic antagonists impaired the mouse DMN-like network similarly, except that cholinergic modulation additionally affected the retrosplenial cortex. This suggests that both neurotransmitter systems are involved in maintaining integrity of FC within the DMN-like network in mice. Cholinergic and serotonergic modulations also affected other functional networks; however, serotonergic modulation impaired the frontal and thalamus networks more extensively. In conclusion, this study demonstrates the utility of pharmacological rsfMRI in animal models to provide insights into the role of specific neurotransmitter systems on functional networks in neurological disorders. PMID:26195064

  7. Linear algebraic calculation of the Green's function for large-scale electronic structure theory

    NASA Astrophysics Data System (ADS)

    Takayama, R.; Hoshi, T.; Sogabe, T.; Zhang, S.-L.; Fujiwara, T.

    2006-04-01

    A linear algebraic method named the shifted conjugate-orthogonal conjugate-gradient method is introduced for large-scale electronic structure calculation. The method gives an iterative solver algorithm of the Green’s function and the density matrix without calculating eigenstates. The problem is reduced to independent linear equations at many energy points and the calculation is actually carried out only for a single energy point. The method is robust against the round-off error and the calculation can reach the machine accuracy. With the observation of residual vectors, the accuracy can be controlled, microscopically, independently for each element of the Green’s function, and dynamically, at each step in dynamical simulations. The method is applied to both a semiconductor and a metal.
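
    The reduction that makes this possible, writing each Green's function element as the solution of shifted linear systems that share a single Krylov subspace, can be stated compactly. The notation below is generic shifted-Krylov bookkeeping and is not copied from the paper.

```latex
% Green's function elements at energies z_k = E_k + i\eta:
G_{ij}(z_k) = \big[(z_k I - H)^{-1}\big]_{ij} = e_i^{\mathsf{T}} x^{(k)},
\qquad (z_k I - H)\, x^{(k)} = e_j .
% All shifted matrices z_k I - H generate the same Krylov subspace,
%   \mathcal{K}_m(z_k I - H, e_j) = \mathcal{K}_m(H, e_j)
%                                 = \mathrm{span}\{e_j,\, H e_j,\, \dots,\, H^{m-1} e_j\},
% so the conjugate-gradient-type iteration is carried out explicitly for one
% reference energy and the solutions at all other shifts follow from scalar
% recurrences, without additional matrix-vector products.
```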

  8. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE PAGES

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  9. Large-Scale Density Functional Theory Transition State Searching in Enzymes.

    PubMed

    Lever, Greg; Cole, Daniel J; Lonsdale, Richard; Ranaghan, Kara E; Wales, David J; Mulholland, Adrian J; Skylaris, Chris-Kriton; Payne, Mike C

    2014-11-01

    Linear-scaling quantum mechanical density functional theory calculations have been applied to study the rearrangement of chorismate to prephenate in large-scale models of the Bacillus subtilis chorismate mutase enzyme. By treating up to 2000 atoms at a consistent quantum mechanical level of theory, we obtain an unbiased, almost parameter-free description of the transition state geometry and energetics. The activation energy barrier is calculated to be lowered by 10.5 kcal mol(-1) in the enzyme, compared with the equivalent reaction in water, which is in good agreement with experiment. Natural bond orbital analysis identifies a number of active site residues that are important for transition state stabilization in chorismate mutase. This benchmark study demonstrates that linear-scaling density functional theory techniques are capable of simulating entire enzymes at the ab initio quantum mechanical level of accuracy.

  10. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-11-01

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  11. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-11-01

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GENASIS  (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  12. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    SciTech Connect

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  13. PupaSuite: finding functional single nucleotide polymorphisms for large-scale genotyping purposes.

    PubMed

    Conde, Lucía; Vaquerizas, Juan M; Dopazo, Hernán; Arbiza, Leonardo; Reumers, Joke; Rousseau, Frederic; Schymkowitz, Joost; Dopazo, Joaquín

    2006-07-01

    We have developed a web tool, PupaSuite, for the selection of single nucleotide polymorphisms (SNPs) with potential phenotypic effect, specifically oriented to help in the design of large-scale genotyping projects. PupaSuite uses a collection of data on SNPs from heterogeneous sources and a large number of pre-calculated predictions to offer a flexible and intuitive interface for selecting an optimal set of SNPs. It improves on the functionality of the PupaSNP and PupasView programs and implements new facilities, such as the analysis of users' data to derive haplotypes with functional information. A new estimator of the putative effect of polymorphisms, which uses evolutionary information, has been included. Predictions from the SNPeffect database have also been included. The PupaSuite web interface is accessible through http://pupasuite.bioinfo.cipf.es and through http://www.pupasnp.org.

  14. Gravity at the horizon: on relativistic effects, CMB-LSS correlations and ultra-large scales in Horndeski's theory

    NASA Astrophysics Data System (ADS)

    Renk, Janina; Zumalacárregui, Miguel; Montanari, Francesco

    2016-07-01

    We address the impact of consistent modifications of gravity on the largest observable scales, focusing on relativistic effects in galaxy number counts and the cross-correlation between the matter large scale structure (LSS) distribution and the cosmic microwave background (CMB). Our analysis applies to a very broad class of general scalar-tensor theories encoded in the Horndeski Lagrangian and is fully consistent on linear scales, retaining the full dynamics of the scalar field and not assuming quasi-static evolution. As particular examples we consider self-accelerating Covariant Galileons, Brans-Dicke theory and parameterizations based on the effective field theory of dark energy, using the hi class code to address the impact of these models on relativistic corrections to LSS observables. We find that especially effects which involve integrals along the line of sight (lensing convergence, time delay and the integrated Sachs-Wolfe effect—ISW) can be considerably modified, and even lead to O(1000%) deviations from General Relativity in the case of the ISW effect for Galileon models, for which standard probes such as the growth function only vary by O(10%). These effects become dominant when correlating galaxy number counts at different redshifts and can lead to ~ 50% deviations in the total signal that might be observable by future LSS surveys. Because of their integrated nature, these deep-redshift cross-correlations are sensitive to modifications of gravity even when probing eras much before dark energy domination. We further isolate the ISW effect using the cross-correlation between LSS and CMB temperature anisotropies and use current data to further constrain Horndeski models. Forthcoming large-volume galaxy surveys using multiple-tracers will search for all these effects, opening a new window to probe gravity and cosmic acceleration at the largest scales available in our universe.

  15. Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos

    NASA Technical Reports Server (NTRS)

    Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.

    1994-01-01

    Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe with the spectrum normalized to the anisotropies detected by Cosmic Background Explorer (COBE) are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.

  16. Correlations at large scales and the onset of turbulence in the fast solar wind

    SciTech Connect

    Wicks, R. T.; Roberts, D. A.; Mallet, A.; Schekochihin, A. A.; Horbury, T. S.; Chen, C. H. K.

    2013-12-01

    We show that the scaling of structure functions of magnetic and velocity fields in a mostly highly Alfvénic fast solar wind stream depends strongly on the joint distribution of the dimensionless measures of cross helicity and residual energy. Already at very low frequencies, fluctuations that are both more balanced (cross helicity ∼0) and equipartitioned (residual energy ∼0) have steep structure functions reminiscent of 'turbulent' scalings usually associated with the inertial range. Fluctuations that are magnetically dominated (residual energy ∼–1), and so have closely anti-aligned Elsasser-field vectors, or are imbalanced (cross helicity ∼1), and so have closely aligned magnetic and velocity vectors, have wide '1/f' ranges typical of fast solar wind. We conclude that the strength of nonlinear interactions of individual fluctuations within a stream, diagnosed by the degree of correlation in direction and magnitude of magnetic and velocity fluctuations, determines the extent of the 1/f region observed, and thus the onset scale for the turbulent cascade.
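
    For orientation, the dimensionless measures referred to here are conventionally defined from the velocity and (Alfvén-normalized) magnetic fluctuations, or equivalently from the Elsasser fields z± = δv ± δb. The normalizations below are the standard ones and are stated as an assumption rather than quoted from the paper.

```latex
\sigma_c \;=\; \frac{2\,\langle \delta\mathbf{v}\cdot\delta\mathbf{b} \rangle}
                    {\langle|\delta\mathbf{v}|^{2}\rangle + \langle|\delta\mathbf{b}|^{2}\rangle}
         \;=\; \frac{\langle|\mathbf{z}^{+}|^{2}\rangle - \langle|\mathbf{z}^{-}|^{2}\rangle}
                    {\langle|\mathbf{z}^{+}|^{2}\rangle + \langle|\mathbf{z}^{-}|^{2}\rangle},
\qquad
\sigma_r \;=\; \frac{\langle|\delta\mathbf{v}|^{2}\rangle - \langle|\delta\mathbf{b}|^{2}\rangle}
                    {\langle|\delta\mathbf{v}|^{2}\rangle + \langle|\delta\mathbf{b}|^{2}\rangle}
         \;=\; \frac{2\,\langle \mathbf{z}^{+}\!\cdot\mathbf{z}^{-} \rangle}
                    {\langle|\mathbf{z}^{+}|^{2}\rangle + \langle|\mathbf{z}^{-}|^{2}\rangle}.
```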

  17. ERRATUM: Correlations at Large Scales and the Onset of Turbulence in the Fast Solar Wind

    NASA Technical Reports Server (NTRS)

    Wicks, R. T.; Roberts, D. A.; Mallet, A.; Schekochihin, A. A.; Horbury, T. S.; Chen, C. H. K.

    2014-01-01

    We show that the scaling of structure functions of magnetic and velocity fields in a mostly highly Alfvenic fast solar wind stream depends strongly on the joint distribution of the dimensionless measures of cross helicity and residual energy. Already at very low frequencies, fluctuations that are both more balanced (cross helicity approx. 0) and equipartitioned (residual energy approx.0) have steep structure functions reminiscent of "turbulent" scalings usually associated with the inertial range. Fluctuations that are magnetically dominated (residual energy approx. –1), and so have closely anti-aligned Elsasser-field vectors, or are imbalanced (cross helicity approx. 1), and so have closely aligned magnetic and velocity vectors, have wide "1/f" ranges typical of fast solar wind. We conclude that the strength of nonlinear interactions of individual fluctuations within a stream, diagnosed by the degree of correlation in direction and magnitude of magnetic and velocity fluctuations, determines the extent of the 1/f region observed, and thus the onset scale for the turbulent cascade.

  18. Large-scale All-electron Density Functional Theory Calculations using Enriched Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kanungo, Bikash; Gavini, Vikram

    We present a computationally efficient method to perform large-scale all-electron density functional theory calculations by enriching the Lagrange polynomial basis in classical finite element (FE) discretization with atom-centered numerical basis functions, which are obtained from the solutions of the Kohn-Sham (KS) problem for single atoms. We term these atom-centered numerical basis functions as enrichment functions. The integrals involved in the construction of the discrete KS Hamiltonian and overlap matrix are computed using an adaptive quadrature grid based on gradients in the enrichment functions. Further, we propose an efficient scheme to invert the overlap matrix by exploiting its LDL factorization and employing spectral finite elements along with Gauss-Lobatto quadrature rules. Finally, we use a Chebyshev polynomial based acceleration technique to compute the occupied eigenspace in each self-consistent iteration. We demonstrate the accuracy, efficiency and scalability of the proposed method on various metallic and insulating benchmark systems, with systems containing on the order of 10,000 electrons. We observe a 50-100 fold reduction in the overall computational time when compared to classical FE calculations while being commensurate with the desired chemical accuracy. We acknowledge the support of NSF (Grant No. 1053145) and ARO (Grant No. W911NF-15-1-0158) in conducting this work.

  19. Understanding structural-functional relationships in the human brain: a large-scale network perspective.

    PubMed

    Wang, Zhijiang; Dai, Zhengjia; Gong, Gaolang; Zhou, Changsong; He, Yong

    2015-06-01

    Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.

  20. Dissociated large-scale functional connectivity networks of the precuneus in medication-naïve first-episode depression.

    PubMed

    Peng, Daihui; Liddle, Elizabeth B; Iwabuchi, Sarina J; Zhang, Chen; Wu, Zhiguo; Liu, Jun; Jiang, Kaida; Xu, Lin; Liddle, Peter F; Palaniyappan, Lena; Fang, Yiru

    2015-06-30

    An imbalance in neural activity within large-scale networks appears to be an important pathophysiological aspect of depression. Yet, there is little consensus regarding the abnormality within the default mode network (DMN) in major depressive disorder (MDD). In the present study, 16 first-episode, medication-naïve patients with MDD and 16 matched healthy controls underwent functional magnetic resonance imaging (fMRI) at rest. With the precuneus (a central node of the DMN) as a seed region, functional connectivity (FC) was measured across the entire brain. The association between the FC of the precuneus and overall symptom severity was assessed using the Hamilton Depression Rating Scale. Patients with MDD exhibited a more negative relationship between the precuneus and the non-DMN regions, including the sensory processing regions (fusiform gyrus, postcentral gyrus) and the secondary motor cortex (supplementary motor area and precentral gyrus). Moreover, greater severity of depression was associated with greater anti-correlation between the precuneus and the temporo-parietal junction as well as stronger positive connectivity between the precuneus and the dorsomedial prefrontal cortex. These results indicate that dissociated large-scale networks of the precuneus may contribute to the clinical expression of depression in MDD. PMID:25957017

  1. FuncTree: Functional Analysis and Visualization for Large-Scale Omics Data.

    PubMed

    Uchiyama, Takeru; Irie, Mitsuru; Mori, Hiroshi; Kurokawa, Ken; Yamada, Takuji

    2015-01-01

    Exponential growth of high-throughput data and the increasing complexity of omics information have been making processing and interpreting biological data an extremely difficult and daunting task. Here we developed FuncTree (http://bioviz.tokyo/functree), a web-based application for analyzing and visualizing large-scale omics data, including but not limited to genomic, metagenomic, and transcriptomic data. FuncTree allows users to map their omics data onto the "Functional Tree map", a predefined circular dendrogram, which represents the hierarchical relationship of all known biological functions defined in the KEGG database. This novel visualization method allows users to overview the broad functionality of their data, thus allowing a more accurate and comprehensive understanding of the omics information. FuncTree provides extensive customization and calculation methods to not only allow users to directly map their omics data to identify the functionality of their data, but also to compute statistically enriched functions by comparing it to other predefined omics data. We have validated FuncTree's analysis and visualization capability by mapping pan-genomic data of three different types of bacterial genera, metagenomic data of the human gut, and transcriptomic data of two different types of human cell expression. All three mappings strongly confirm FuncTree's capability to analyze and visually represent key functional features of the omics data. We believe that FuncTree's capability to conduct various functional calculations and to visualize the results as a holistic overview of biological function would make it an integral analysis/visualization tool for extensive omics-based research. PMID:25974630

  2. FuncTree: Functional Analysis and Visualization for Large-Scale Omics Data

    PubMed Central

    Uchiyama, Takeru; Irie, Mitsuru; Mori, Hiroshi; Kurokawa, Ken; Yamada, Takuji

    2015-01-01

    Exponential growth of high-throughput data and the increasing complexity of omics information have been making processing and interpreting biological data an extremely difficult and daunting task. Here we developed FuncTree (http://bioviz.tokyo/functree), a web-based application for analyzing and visualizing large-scale omics data, including but not limited to genomic, metagenomic, and transcriptomic data. FuncTree allows users to map their omics data onto the “Functional Tree map”, a predefined circular dendrogram, which represents the hierarchical relationship of all known biological functions defined in the KEGG database. This novel visualization method allows users to overview the broad functionality of their data, thus allowing a more accurate and comprehensive understanding of the omics information. FuncTree provides extensive customization and calculation methods to not only allow users to directly map their omics data to identify the functionality of their data, but also to compute statistically enriched functions by comparing it to other predefined omics data. We have validated FuncTree’s analysis and visualization capability by mapping pan-genomic data of three different types of bacterial genera, metagenomic data of the human gut, and transcriptomic data of two different types of human cell expression. All three mappings strongly confirm FuncTree’s capability to analyze and visually represent key functional features of the omics data. We believe that FuncTree’s capability to conduct various functional calculations and to visualize the results as a holistic overview of biological function would make it an integral analysis/visualization tool for extensive omics-based research. PMID:25974630

  3. Large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger Causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting state human brain fMRI data and comparing it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice Coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
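
    A highly simplified sketch of the idea of combining dimensionality reduction with predictive modeling to score directed influence: project the data onto a few principal components, fit a linear autoregressive predictor for a target time series with and without the candidate source, and take the log ratio of residual variances as the causality score. The PCA dimension, the lag of one, and the variance-ratio score are assumptions for illustration, not the lsGC algorithm as published.

```python
import numpy as np

def granger_score(source, target, others, n_components=5, lag=1):
    """Directed influence of `source` on `target` (both 1-D time series),
    conditioned on a PCA-reduced summary of the remaining voxels `others`
    (shape: n_timepoints x n_voxels). Returns log(var_reduced / var_full)."""
    x = others - others.mean(axis=0)                     # PCA of the conditioning set
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    comps = x @ vt[:n_components].T                      # (T, n_components)

    y = target[lag:]                                     # values to predict
    base = np.column_stack([comps[:-lag], target[:-lag]])
    full = np.column_stack([base, source[:-lag]])        # add the candidate source

    def residual_var(design):
        design = np.column_stack([np.ones(len(design)), design])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.var(y - design @ beta)

    return np.log(residual_var(base) / residual_var(full))

# Toy example: the source drives the target with a one-step delay
rng = np.random.default_rng(1)
T = 300
src = rng.standard_normal(T)
tgt = 0.8 * np.roll(src, 1) + 0.2 * rng.standard_normal(T)
rest = rng.standard_normal((T, 50))
print(granger_score(src, tgt, rest))                     # clearly positive
```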

  4. DGDFT: A massively parallel method for large scale density functional theory calculations

    SciTech Connect

    Hu, Wei Yang, Chao; Lin, Lin

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  5. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 × 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.
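
    For orientation, the correlation functions and the power spectrum invoked here are related by the usual Fourier transform, and the angular correlation function is a line-of-sight projection of the spatial one weighted by the survey selection function φ(z); these relations are standard and are stated here rather than quoted from the paper.

```latex
\xi(r) \;=\; \frac{1}{2\pi^{2}} \int_{0}^{\infty} dk\, k^{2}\, P(k)\, \frac{\sin kr}{kr},
\qquad
w(\theta) \;=\; \int dz_{1}\, dz_{2}\;\phi(z_{1})\,\phi(z_{2})\;
\xi\!\big(r_{12}(z_{1}, z_{2}, \theta)\big).
```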

  6. Multifractal Detrended Cross-Correlation Analysis for Large-Scale Warehouse-Out Behaviors

    NASA Astrophysics Data System (ADS)

    Yao, Can-Zhong; Lin, Ji-Nan; Zheng, Xu-Zhou

    2015-09-01

    Based on a cross-correlation algorithm, we analyze the correlation properties of warehouse-out quantity across different warehouses and across different products within each warehouse. Our study identifies significant cross-correlation relationships for warehouse-out quantity among different warehouses and among different products of a warehouse. Further, we perform multifractal detrended cross-correlation analysis of warehouse-out quantity among different warehouses and different products of a warehouse. The results show that the warehouse-out behaviors of the total amount, of different warehouses, and of different products of a warehouse all exhibit significant multifractal properties. Specifically, for each warehouse the coupling relationships of rebar and wire rod reveal long-term memory characteristics, for both large and small fluctuations. The cross-correlation effect on the long-range memory property among warehouses probably has less to do with product types, and the long-term memory of the YZ warehouse is greater than that of the others, especially for the total amount and the wire rod product. Finally, we shuffle and surrogate the data to explore the source of the multifractal cross-correlation property in the logistics system. Taking the total amount of warehouse-out quantity as an example, we confirm that the fat-tail distribution of the warehouse-out quantity sequences is the main factor behind the multifractal cross-correlation. Comparing the performance of the multifractal detrended cross-correlation analysis (MF-DCCA), the centered multifractal detrending moving average cross-correlation analysis (MF-X-DMA), and the forward and backward MF-X-DMA algorithms, we find that the forward and backward MF-X-DMA algorithms exhibit a better performance than the other ones.
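
    A compact sketch of the detrended cross-correlation fluctuation function on which the MF-DCCA family is built follows; the segment handling, the linear detrending order, and the q-moment averaging are textbook choices and are assumptions here rather than the exact variants compared in the paper.

```python
import numpy as np

def mfdcca_fluctuation(x, y, scale, q=2.0, order=1):
    """q-th order detrended cross-correlation fluctuation F_q(s) for two
    equal-length series x, y at window size `scale` (textbook MF-DCCA step)."""
    px = np.cumsum(x - np.mean(x))                        # integrated profiles
    py = np.cumsum(y - np.mean(y))
    n_seg = len(px) // scale
    t = np.arange(scale)
    f2 = []
    for i in range(n_seg):
        sx = px[i * scale:(i + 1) * scale]
        sy = py[i * scale:(i + 1) * scale]
        rx = sx - np.polyval(np.polyfit(t, sx, order), t)  # detrend each segment
        ry = sy - np.polyval(np.polyfit(t, sy, order), t)
        f2.append(np.mean(rx * ry))                        # segment cross-covariance
    f2 = np.abs(np.array(f2))
    return np.mean(f2 ** (q / 2.0)) ** (1.0 / q)

# The scaling of F_q(s) over several window sizes yields the generalized
# cross-correlation exponent h_xy(q); q-dependence signals multifractality.
rng = np.random.default_rng(2)
a = np.cumsum(rng.standard_normal(4096))
b = 0.7 * a + 0.3 * np.cumsum(rng.standard_normal(4096))
print([round(mfdcca_fluctuation(a, b, s), 3) for s in (16, 64, 256)])
```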

  7. Environmental correlates of large-scale spatial variation in the δ13C of marine animals

    NASA Astrophysics Data System (ADS)

    Barnes, Carolyn; Jennings, Simon; Barry, Jon T.

    2009-02-01

    Carbon stable isotopes can be used to trace the sources of energy supporting food chains and to estimate the contribution of different sources to a consumer's diet. However, the δ13C signature of a consumer is not sufficient to infer source without an appropriate isotopic baseline, because there is no way to determine if differences in consumer δ13C reflect source changes or baseline variation. Describing isotopic baselines is a considerable challenge when applying stable isotope techniques at large spatial scales and/or to interconnected food chains in open marine environments. One approach is to use filter-feeding consumers to integrate the high frequency and small-scale variation in the isotopic signature of phytoplankton and provide a surrogate baseline, but it can be difficult to sample a single consumer species at large spatial scales owing to rarity and/or discontinuous distribution. Here, we use the isotopic signature of a widely distributed filter-feeder (the queen scallop Aequipecten opercularis) in the north-eastern Atlantic to develop a model linking base δ13C to environmental variables. Remarkably, a single variable model based on bottom temperature has good predictive power and predicts scallop δ13C with a mean error of only 0.6‰ (3%). When the model was used to predict an isotopic baseline in parts of the overall study region where scallop were not consistently sampled, the model accounted for 76% and 79% of the large-scale spatial variability (10^1-10^4 km) of the δ13C of two fish species (dab Limanda limanda and whiting Merlangius merlangus) and 44% of the δ13C variability in a mixed fish community. The results show that source studies would be significantly biased if a single baseline were applied to food webs at larger scales. Further, when baseline δ13C cannot be directly measured, a calculated baseline value can eliminate a large proportion of the unexplained variation in δ13C at higher trophic levels.

  8. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  9. Hybrid Multiphoton Volumetric Functional Imaging of Large Scale Bioengineered Neuronal Networks

    PubMed Central

    Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-01-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bio-engineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes/sec of structures with mm-scale dimensions containing a network of over 1000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances. PMID:24898000

  10. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  11. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.

  12. Inferring cortical function in the mouse visual system through large-scale systems neuroscience

    PubMed Central

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W.; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R. Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-01-01

    The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort. PMID:27382147

  13. Large-scale density functional theory investigation of failure modes in ZnO nanowires.

    PubMed

    Agrawal, Ravi; Paci, Jeffrey T; Espinosa, Horacio D

    2010-09-01

    Electromechanical and photonic properties of semiconducting nanowires depend on their strain states and are limited by their extent of deformation. A fundamental understanding of the mechanical response of individual nanowires is therefore essential to assess system reliability and to define the design space of future nanowire-based devices. Here we perform a large-scale density functional theory (DFT) investigation of failure modes in zinc oxide (ZnO) nanowires. Nanowires as large as 3.6 nm in diameter with 864 atoms were investigated. The study reveals that pristine nanowires can be elastically deformed to strains as high as 20%, prior to a phase transition leading to fracture. The current study suggests that the phase transition predicted at approximately 10% strain in pristine nanowires by the Buckingham pairwise potential (BP) is an artifact of approximations inherent in the BP. Instead, DFT-based energy barrier calculations suggest that defects may trigger a heterogeneous phase transition leading to failure. Thus, the difference previously reported between in situ electron microscopy tensile experiments (brittle fracture) and atomistic simulations (phase transition and secondary loading) (Agrawal, R.; Peng, B.; Espinosa, H. D. Nano Lett. 2009, 9 (12), 4177-4183) is elucidated. PMID:20726573

  14. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    NASA Technical Reports Server (NTRS)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection and post-test analysis and results are presented in this paper.

  15. Risks of large-scale use of systemic insecticides to ecosystem functioning and services.

    PubMed

    Chagnon, Madeleine; Kreutzweiser, David; Mitchell, Edward A D; Morrissey, Christy A; Noome, Dominique A; Van der Sluijs, Jeroen P

    2015-01-01

    Large-scale use of the persistent and potent neonicotinoid and fipronil insecticides has raised concerns about risks to ecosystem functions provided by a wide range of species and environments affected by these insecticides. The concept of ecosystem services is widely used in decision making in the context of valuing the service potentials, benefits, and use values that well-functioning ecosystems provide to humans and the biosphere and, as an endpoint (value to be protected), in ecological risk assessment of chemicals. Neonicotinoid insecticides are frequently detected in soil and water and are also found in air, as dust particles during sowing of crops and aerosols during spraying. These environmental media provide essential resources to support biodiversity, but are known to be threatened by long-term or repeated contamination by neonicotinoids and fipronil. We review the state of knowledge regarding the potential impacts of these insecticides on ecosystem functioning and services provided by terrestrial and aquatic ecosystems including soil and freshwater functions, fisheries, biological pest control, and pollination services. Empirical studies examining the specific impacts of neonicotinoids and fipronil to ecosystem services have focused largely on the negative impacts to beneficial insect species (honeybees) and the impact on pollination service of food crops. However, here we document broader evidence of the effects on ecosystem functions regulating soil and water quality, pest control, pollination, ecosystem resilience, and community diversity. In particular, microbes, invertebrates, and fish play critical roles as decomposers, pollinators, consumers, and predators, which collectively maintain healthy communities and ecosystem integrity. Several examples in this review demonstrate evidence of the negative impacts of systemic insecticides on decomposition, nutrient cycling, soil respiration, and invertebrate populations valued by humans. Invertebrates

  17. Aging and large-scale functional networks: white matter integrity, gray matter volume, and functional connectivity in the resting state.

    PubMed

    Marstaller, L; Williams, M; Rich, A; Savage, G; Burianová, H

    2015-04-01

    Healthy aging is accompanied by neurobiological changes that affect the brain's functional organization and the individual's cognitive abilities. The aim of this study was to investigate the effect of global age-related differences in the cortical white and gray matter on neural activity in three key large-scale networks. We used functional-structural covariance network analysis to assess resting state activity in the default mode network (DMN), the fronto-parietal network (FPN), and the salience network (SN) of young and older adults. We further related this functional activity to measures of cortical thickness and volume derived from structural MRI, as well as to measures of white matter integrity (fractional anisotropy [FA], mean diffusivity [MD], and radial diffusivity [RD]) derived from diffusion-weighted imaging. First, our results show that, in the direct comparison of resting state activity, young but not older adults reliably engage the SN and FPN in addition to the DMN, suggesting that older adults recruit these networks less consistently. Second, our results demonstrate that age-related decline in white matter integrity and gray matter volume is associated with activity in prefrontal nodes of the SN and FPN, possibly reflecting compensatory mechanisms. We suggest that age-related differences in gray and white matter properties differentially affect the ability of the brain to engage and coordinate large-scale functional networks that are central to efficient cognitive functioning.

  18. Subspace accelerated inexact Newton method for large scale wave functions calculations in Density Functional Theory

    SciTech Connect

    Fattebert, J

    2008-07-29

    We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
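
    As a hedged illustration of the RMM-DIIS family of updates the abstract refers to (not the SAIN implementation itself), the following Python sketch applies a DIIS extrapolation step to a toy linear fixed-point iteration; the problem, step size, and history length are all placeholders.

```python
# DIIS-style subspace acceleration: mix previous trial vectors so that the
# extrapolated residual norm is minimized, applied here to a toy linear system.
import numpy as np

def diis_extrapolate(trials, residuals):
    """Solve the augmented DIIS system: minimize ||sum_i c_i r_i|| with sum_i c_i = 1."""
    m = len(trials)
    B = np.zeros((m + 1, m + 1))
    for i, ri in enumerate(residuals):
        for j, rj in enumerate(residuals):
            B[i, j] = ri @ rj
    B[m, :m] = B[:m, m] = -1.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]   # robust to near-singular B
    return sum(ci * xi for ci, xi in zip(c, trials))

# Toy usage: accelerate a damped fixed-point iteration for A x = b.
rng = np.random.default_rng(1)
A = np.diag(np.linspace(1.0, 4.0, 20)) + 0.05 * rng.standard_normal((20, 20))
b = rng.standard_normal(20)
x, trials, residuals = np.zeros(20), [], []
for _ in range(15):
    r = b - A @ x
    trials.append(x + 0.3 * r)                        # simple preconditioned update
    residuals.append(r)
    trials, residuals = trials[-5:], residuals[-5:]   # keep a short history
    x = diis_extrapolate(trials, residuals)
print("final residual norm:", np.linalg.norm(b - A @ x))
```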

  19. Functional network construction in Arabidopsis using rule-based machine learning on large-scale data sets.

    PubMed

    Bassel, George W; Glaab, Enrico; Marquez, Julietta; Holdsworth, Michael J; Bacardit, Jaume

    2011-09-01

    The meta-analysis of large-scale postgenomics data sets within public databases promises to provide important novel biological knowledge. Statistical approaches including correlation analyses in coexpression studies of gene expression have emerged as tools to elucidate gene function using these data sets. Here, we present a powerful and novel alternative methodology to computationally identify functional relationships between genes from microarray data sets using rule-based machine learning. This approach, termed "coprediction," is based on the collective ability of groups of genes co-occurring within rules to accurately predict the developmental outcome of a biological system. We demonstrate the utility of coprediction as a powerful analytical tool using publicly available microarray data generated exclusively from Arabidopsis thaliana seeds to compute a functional gene interaction network, termed Seed Co-Prediction Network (SCoPNet). SCoPNet predicts functional associations between genes acting in the same developmental and signal transduction pathways irrespective of the similarity in their respective gene expression patterns. Using SCoPNet, we identified four novel regulators of seed germination (ALTERED SEED GERMINATION5, 6, 7, and 8), and predicted interactions at the level of transcript abundance between these novel and previously described factors influencing Arabidopsis seed germination. An online Web tool to query SCoPNet has been developed as a community resource to dissect seed biology and is available at http://www.vseed.nottingham.ac.uk/. PMID:21896882

  20. Low frequency steady-state brain responses modulate large scale functional networks in a frequency-specific means.

    PubMed

    Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu

    2016-01-01

    Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and sensory-motor system, indicating that large scale networks can be modulated in a frequency tagging way. Furthermore, the inter- and intranetwork synchronizations as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much stronger than distinguishing attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it paves a new way to investigate frequency-specific large scale brain activities. PMID:26512872
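
    As a rough illustration of the frequency-tagging readout described above (not the authors' analysis pipeline), the sketch below measures spectral power at an assumed 0.1 Hz tagging frequency and its first harmonic in a synthetic time series; the sampling rate and frequencies are placeholders.

```python
# Read out steady-state response power at the tagging frequency and its harmonics.
import numpy as np

def power_at(signal, fs, freq):
    """Power of the Fourier component closest to `freq` (in Hz)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

fs, f0 = 2.0, 0.1                         # 2 Hz sampling, 0.1 Hz tagging frequency
t = np.arange(0, 300, 1.0 / fs)           # 5 minutes of data
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.5 * rng.standard_normal(t.size))
for f in (f0, 2 * f0, 3 * f0):
    print(f"{f:.2f} Hz power: {power_at(signal, fs, f):.1f}")
```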

  2. REIONIZATION ON LARGE SCALES. II. DETECTING PATCHY REIONIZATION THROUGH CROSS-CORRELATION OF THE COSMIC MICROWAVE BACKGROUND

    SciTech Connect

    Natarajan, A.; Battaglia, N.; Trac, H.; Pen, U.-L.; Loeb, A.

    2013-10-20

    We investigate the effect of patchy reionization on the cosmic microwave background (CMB) temperature. An anisotropic optical depth τ(n̂) alters the TT power spectrum on small scales l > 2000. We make use of the correlation between the matter density and the reionization redshift fields to construct full sky maps of τ(n̂). Patchy reionization transfers CMB power from large scales to small scales, resulting in a non-zero cross correlation between large and small angular scales. We show that the patchy τ correlator is sensitive to small root mean square (rms) values τ_rms ∼ 0.003 seen in our maps. We include frequency-independent secondaries such as CMB lensing and kinetic Sunyaev-Zel'dovich (kSZ) terms, and show that patchy τ may still be detected at high significance. Reionization models that predict different values of τ_rms may be distinguished even for the same mean value ⟨τ⟩. It is more difficult to detect patchy τ in the presence of larger secondaries such as the thermal Sunyaev-Zel'dovich, radio background, and the cosmic infrared background. In this case, we show that patchy τ may be detected if these frequency-dependent secondaries are minimized to ≲ 5 μK (rms) by means of a multi-frequency analysis. We show that the patchy τ correlator provides information that is complementary to what may be obtained from the polarization and the kSZ power spectra.

  3. Large-scale production of functional human lysozyme from marker-free transgenic cloned cows

    PubMed Central

    Lu, Dan; Liu, Shen; Ding, Fangrong; Wang, Haiping; Li, Jing; Li, Ling; Dai, Yunping; Li, Ning

    2016-01-01

    Human lysozyme is an important natural non-specific immune protein that is highly expressed in breast milk and participates in the immune response of infants against bacterial and viral infections. Considering the medicinal value and market demand for human lysozyme, an animal model for large-scale production of recombinant human lysozyme (rhLZ) is needed. In this study, we generated transgenic cloned cows with the marker-free vector pBAC-hLF-hLZ, which was shown to efficiently express rhLZ in cow milk. Seven transgenic cloned cows, identified by polymerase chain reaction, Southern blot, and western blot analyses, produced rhLZ in milk at concentrations of up to 3149.19 ± 24.80 mg/L. The purified rhLZ had a molecular weight and enzymatic activity similar to those of wild-type human lysozyme and possessed the same C-terminal and N-terminal amino acid sequences. Preliminary data on milk yield and milk composition from a naturally lactating transgenic cloned cow (0906) are also reported. These results provide a solid foundation for the large-scale production of rhLZ in the future. PMID:26961596

  4. A large-scale functional screen identifies Nova1 and Ncoa3 as regulators of neuronal miRNA function

    PubMed Central

    Störchel, Peter H; Thümmler, Juliane; Siegel, Gabriele; Aksoy-Aksel, Ayla; Zampa, Federico; Sumer, Simon; Schratt, Gerhard

    2015-01-01

    MicroRNAs (miRNAs) are important regulators of neuronal development, network connectivity, and synaptic plasticity. While many neuronal miRNAs were previously shown to modulate neuronal morphogenesis, little is known regarding the regulation of miRNA function. In a large-scale functional screen, we identified two novel regulators of neuronal miRNA function, Nova1 and Ncoa3. Both proteins are expressed in the nucleus and the cytoplasm of developing hippocampal neurons. We found that Nova1 and Ncoa3 stimulate miRNA function by different mechanisms that converge on Argonaute (Ago) proteins, core components of the miRNA-induced silencing complex (miRISC). While Nova1 physically interacts with Ago proteins, Ncoa3 selectively promotes the expression of Ago2 at the transcriptional level. We further show that Ncoa3 regulates dendritic complexity and dendritic spine maturation of hippocampal neurons in a miRNA-dependent fashion. Importantly, both the loss of miRNA activity and increased dendrite complexity upon Ncoa3 knockdown were rescued by Ago2 overexpression. Together, we uncovered two novel factors that control neuronal miRISC function at the level of Ago proteins, with possible implications for the regulation of synapse development and plasticity. PMID:26105073

  5. Large-scale Probabilistic Functional Modes from resting state fMRI

    PubMed Central

    Harrison, Samuel J.; Woolrich, Mark W.; Robinson, Emma C.; Glasser, Matthew F.; Beckmann, Christian F.; Jenkinson, Mark; Smith, Stephen M.

    2015-01-01

    It is well established that it is possible to observe spontaneous, highly structured, fluctuations in human brain activity from functional magnetic resonance imaging (fMRI) when the subject is ‘at rest’. However, characterising this activity in an interpretable manner is still a very open problem. In this paper, we introduce a method for identifying modes of coherent activity from resting state fMRI (rfMRI) data. Our model characterises a mode as the outer product of a spatial map and a time course, constrained by the nature of both the between-subject variation and the effect of the haemodynamic response function. This is presented as a probabilistic generative model within a variational framework that allows Bayesian inference, even on voxelwise rfMRI data. Furthermore, using this approach it becomes possible to infer distinct extended modes that are correlated with each other in space and time, a property which we believe is neuroscientifically desirable. We assess the performance of our model on both simulated data and high quality rfMRI data from the Human Connectome Project, and contrast its properties with those of both spatial and temporal independent component analysis (ICA). We show that our method is able to stably infer sets of modes with complex spatio-temporal interactions and spatial differences between subjects. PMID:25598050
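
    The core generative idea, each mode as the outer product of a spatial map and a time course, can be illustrated with a toy forward model. The alternating least-squares refit below is only a stand-in for the paper's variational Bayesian inference, and all dimensions are arbitrary.

```python
# Toy forward model: data as a sum of modes, each an outer product of a spatial
# map and a time course, plus noise; recovered here by alternating least squares.
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_time, n_modes = 500, 200, 3
maps = rng.standard_normal((n_vox, n_modes))
courses = rng.standard_normal((n_time, n_modes))
data = maps @ courses.T + 0.1 * rng.standard_normal((n_vox, n_time))

# Alternating least squares: fix time courses, solve for maps, and vice versa.
est_courses = rng.standard_normal((n_time, n_modes))
for _ in range(20):
    est_maps = data @ np.linalg.pinv(est_courses).T
    est_courses = data.T @ np.linalg.pinv(est_maps).T
recon = est_maps @ est_courses.T
print("relative reconstruction error:", np.linalg.norm(data - recon) / np.linalg.norm(data))
```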

  6. Large-scale production of functional human lysozyme in transgenic cloned goats.

    PubMed

    Yu, Huiqing; Chen, Jianquan; Liu, Siguo; Zhang, Aimin; Xu, Xujun; Wang, Xuebin; Lu, Ping; Cheng, Guoxiang

    2013-12-01

    Human lysozyme (hLZ), an essential protein against many types of microorganisms, has been expressed in transgenic livestock to improve their health status and milk quality. However, the large-scale production of hLZ in transgenic livestock is currently unavailable. Here we describe the generation of transgenic goats, by somatic cell-mediated transgenic cloning, that express large amounts of recombinant human lysozyme (rhLZ) in milk. Specifically, two optimized lysozyme expression cassettes (β-casein/hLZ and β-lactoglobulin/hLZ) were designed and introduced into goat somatic cells by cell transfection. Using transgenic cell colonies, which were screened by 0.8 mg/mL G418, as a nuclear donor, we obtained 10 transgenic cloned goats containing one copy of the hLZ hybrid gene. An ELISA assay indicated that the transgenic goats secreted up to 6.2 g/L of rhLZ in their milk during the natural lactation period, which is approximately 5-10 times higher than in human milk. The average rhLZ expression levels in β-casein/hLZ and β-lactoglobulin/hLZ transgenic goats were 2.3 g/L and 3.6 g/L, respectively. Therefore, both rhLZ expression cassettes could induce high levels of expression of rhLZ in goat mammary glands. In addition, the rhLZ purified from goat milk has physicochemical properties similar to those of natural human lysozyme, including the molecular mass, N-terminal sequence, lytic activity, and thermal and pH stability. An antibacterial analysis revealed that rhLZ and hLZ were equally effective in two bacterial inhibition experiments using Staphylococcus aureus and Escherichia coli. Taken together, our experiments not only underlined that the large-scale production of biologically active rhLZ in animal mammary gland is realistic, but also demonstrated that rhLZ purified from goat milk will be potentially useful in biopharmaceuticals.

  7. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. These resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to use of spatially correlated errors, and to other data types.
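
    A hedged sketch of the underlying kernel idea: if each column of a matrix G holds the linearized model response to one unit perturbation, the observed model-data misfit d can be inverted for perturbation amplitudes by regularized least squares. G and d below are synthetic; the actual study builds G from general circulation model runs.

```python
# Generic Green's function inverse: explain the misfit d as d ≈ G a, where the
# columns of G are impulse responses, via Tikhonov-regularized least squares.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_kernels = 400, 25
G = rng.standard_normal((n_obs, n_kernels))            # synthetic impulse responses
a_true = rng.standard_normal(n_kernels)
d = G @ a_true + 0.2 * rng.standard_normal(n_obs)      # data-minus-model misfit

lam = 1.0                                              # damping parameter
a_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_kernels), G.T @ d)
print("max coefficient error:", np.abs(a_hat - a_true).max())
```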

  8. Decoding the Large-Scale Structure of Brain Function by Classifying Mental States Across Individuals

    PubMed Central

    Poldrack, Russell A.; Halchenko, Yaroslav; Hanson, Stephen José

    2010-01-01

    Brain-imaging research has largely focused on localizing patterns of activity related to specific mental processes, but recent work has shown that mental states can be identified from neuroimaging data using statistical classifiers. We investigated whether this approach could be extended to predict the mental state of an individual using a statistical classifier trained on other individuals, and whether the information gained in doing so could provide new insights into how mental processes are organized in the brain. Using a variety of classifier techniques, we achieved cross-validated classification accuracy of 80% across individuals (chance = 13%). Using a neural network classifier, we recovered a low-dimensional representation common to all the cognitive-perceptual tasks in our data set, and we used an ontology of cognitive processes to determine the cognitive concepts most related to each dimension. These results revealed a small organized set of large-scale networks that map cognitive processes across a highly diverse set of mental tasks, suggesting a novel way to characterize the neural basis of cognition. PMID:19883493
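
    For illustration only, the sketch below runs across-individual decoding as a leave-one-subject-out cross-validation with scikit-learn on synthetic activation patterns; the classifier choice, feature dimensions, and class structure are assumptions, not the study's setup.

```python
# Across-subject decoding: train on all subjects but one, test on the held-out subject.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(5)
n_subj, trials_per_subj, n_feat, n_classes = 10, 40, 60, 8
y = rng.integers(0, n_classes, n_subj * trials_per_subj)        # task labels
groups = np.repeat(np.arange(n_subj), trials_per_subj)          # subject IDs
class_means = 2.0 * rng.standard_normal((n_classes, n_feat))
X = class_means[y] + rng.standard_normal((y.size, n_feat))      # per-trial patterns

clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy across held-out subjects: {scores.mean():.2f} (chance ~ {1 / n_classes:.2f})")
```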

  9. A 3D Sphere Culture System Containing Functional Polymers for Large-Scale Human Pluripotent Stem Cell Production

    PubMed Central

    Otsuji, Tomomi G.; Bin, Jiang; Yoshimura, Azumi; Tomura, Misayo; Tateyama, Daiki; Minami, Itsunari; Yoshikawa, Yoshihiro; Aiba, Kazuhiro; Heuser, John E.; Nishino, Taito; Hasegawa, Kouichi; Nakatsuji, Norio

    2014-01-01

    Utilizing human pluripotent stem cells (hPSCs) in cell-based therapy and drug discovery requires large-scale cell production. However, scaling up conventional adherent cultures presents challenges of maintaining a uniform high quality at low cost. In this regard, suspension cultures are a viable alternative, because they are scalable and do not require adhesion surfaces. 3D culture systems such as bioreactors can be exploited for large-scale production. However, the limitations of current suspension culture methods include spontaneous fusion between cell aggregates and suboptimal passaging methods by dissociation and reaggregation. 3D culture systems that dynamically stir carrier beads or cell aggregates should be refined to reduce shearing forces that damage hPSCs. Here, we report a simple 3D sphere culture system that incorporates mechanical passaging and functional polymers. This setup resolves major problems associated with suspension culture methods and dynamic stirring systems and may be optimal for applications involving large-scale hPSC production. PMID:24936458

  10. Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.

    2016-01-01

    The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.

  11. Insights into Hox protein function from a large scale combinatorial analysis of protein domains.

    PubMed

    Merabet, Samir; Litim-Mecheri, Isma; Karlsson, Daniel; Dixit, Richa; Saadaoui, Mehdi; Monier, Bruno; Brun, Christine; Thor, Stefan; Vijayraghavan, K; Perrin, Laurent; Pradel, Jacques; Graba, Yacine

    2011-10-01

    Protein function is encoded within protein sequence and protein domains. However, how protein domains cooperate within a protein to modulate overall activity and how this impacts functional diversification at the molecular and organism levels remains largely unaddressed. Focusing on three domains of the central class Drosophila Hox transcription factor AbdominalA (AbdA), we used combinatorial domain mutations and most known AbdA developmental functions as biological readouts to investigate how protein domains collectively shape protein activity. The results uncover redundancy, interactivity, and multifunctionality of protein domains as salient features underlying overall AbdA protein activity, providing means to apprehend functional diversity and accounting for the robustness of Hox-controlled developmental programs. Importantly, the results highlight context-dependency in protein domain usage and interaction, allowing major modifications in domains to be tolerated without general functional loss. The non-pleiotropic effect of domain mutation suggests that protein modification may contribute more broadly to molecular changes underlying morphological diversification during evolution, so far thought to rely largely on modification in gene cis-regulatory sequences.

  12. Evidence of a Large-Scale Functional Organization of Mammalian Chromosomes

    PubMed Central

    2005-01-01

    Evidence from inbred strains of mice indicates that a quarter or more of the mammalian genome consists of chromosome regions containing clusters of functionally related genes. The intense selection pressures during inbreeding favor the coinheritance of optimal sets of alleles among these genetically linked, functionally related genes, resulting in extensive domains of linkage disequilibrium (LD) among a set of 60 genetically diverse inbred strains. Recombination that disrupts the preferred combinations of alleles reduces the ability of offspring to survive further inbreeding. LD is also seen between markers on separate chromosomes, forming networks with scale-free architecture. Combining LD data with pathway and genome annotation databases, we have been able to identify the biological functions underlying several domains and networks. Given the strong conservation of gene order among mammals, the domains and networks we find in mice probably characterize all mammals, including humans. PMID:16163395

  13. Size-dependent species removal impairs ecosystem functioning in a large-scale tropical field experiment.

    PubMed

    Dangles, Olivier; Carpio, Carlos; Woodward, Guy

    2012-12-01

    A major challenge of ecological research is to assess the functional consequences of species richness loss over time and space in global biodiversity hotspots, where extinctions are happening at an unprecedented rate. To address this issue, greater realism needs to be incorporated into both conceptual and experimental approaches. Here we propose a conceptual model that incorporates body size as a critical aspect of community responses to environmental change, which we tested in the Western Amazonian rain forest, one of the most speciose ecosystems on the planet. We employed an exclosure removal experiment (replicated under 10 microhabitats and four climatic conditions) in which we manipulated access to two types of resource by the whole community of dung and carrion beetles (> 60 species), depending on their size. Our 400 independent measurements revealed that changes in the number of species and functional groups, and temporal patterns in community composition, all affected resource burial rates, a key ecosystem process. Further, the functional contribution of species diversity in each size class was tightly dependent on beetle abundance, and while the role of large species could be performed by abundant smaller ones, and other naturally occurring decomposers, this was not the case when environmental conditions were harsher. These results demonstrate, for the first time in an animal assemblage in a tropical ecosystem, that although species may appear functionally redundant under one set of environmental conditions, many species would be needed to maintain ecosystem functioning at multiple temporal and spatial scales. This highlights the potential fragility of these systems to the ongoing global "Sixth Great Extinction," whose effects are likely to be especially pronounced in the Tropics.

  14. Generation and Analysis of Large-Scale Data-Driven Mycobacterium tuberculosis Functional Networks for Drug Target Identification.

    PubMed

    Mazandu, Gaston K; Mulder, Nicola J

    2011-01-01

    Technological developments in large-scale biological experiments, coupled with bioinformatics tools, have opened the doors to computational approaches for the global analysis of whole genomes. This has provided the opportunity to look at genes within their context in the cell. The integration of vast amounts of data generated by these technologies provides a strategy for identifying potential drug targets within microbial pathogens, the causative agents of infectious diseases. As proteins are druggable targets, functional interaction networks between proteins are used to identify proteins essential to the survival, growth, and virulence of these microbial pathogens. Here we have integrated functional genomics data to generate functional interaction networks between Mycobacterium tuberculosis proteins and carried out computational analyses to dissect the functional interaction network produced for identifying drug targets using network topological properties. This study has provided the opportunity to expand the range of potential drug targets and to move towards optimal target-based strategies.
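
    As a minimal, assumption-laden sketch of the network-topology step, the following ranks nodes of a toy functional interaction network by betweenness and degree centrality with networkx; the edge list is illustrative and is not the M. tuberculosis network built in the paper.

```python
# Rank proteins in a functional interaction network by topological centrality,
# a common proxy for nodes whose removal would most disrupt the network.
import networkx as nx

edges = [("rpoB", "rpoC"), ("rpoB", "sigA"), ("sigA", "rpoC"),
         ("katG", "ahpC"), ("katG", "furA"), ("inhA", "fabG1"),
         ("inhA", "kasA"), ("kasA", "fabG1"), ("rpoB", "inhA")]   # toy edge list
G = nx.Graph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
ranked = sorted(G.nodes, key=lambda n: (betweenness[n], degree[n]), reverse=True)
for node in ranked[:5]:
    print(f"{node:7s} betweenness={betweenness[node]:.2f} degree={degree[node]:.2f}")
```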

  15. Long-term and large-scale perspectives on the relationship between biodiversity and ecosystem functioning

    USGS Publications Warehouse

    Symstad, A.J.; Chapin, F. S.; Wall, D.H.; Gross, K.L.; Huenneke, L.F.; Mittelbach, G.G.; Peters, D.P.C.; Tilman, D.

    2003-01-01

    A growing body of literature from a variety of ecosystems provides strong evidence that various components of biodiversity have significant impacts on ecosystem functioning. However, much of this evidence comes from short-term, small-scale experiments in which communities are synthesized from relatively small species pools and conditions are highly controlled. Extrapolation of the results of such experiments to longer time scales and larger spatial scales - those of whole ecosystems - is difficult because the experiments do not incorporate natural processes such as recruitment limitation and colonization of new species. We show how long-term study of planned and accidental changes in species richness and composition suggests that the effects of biodiversity on ecosystem functioning will vary over time and space. More important, we also highlight areas of uncertainty that need to be addressed through coordinated cross-scale and cross-site research.

  16. Large-scale turnover of functional transcription factor binding sites in Drosophila

    SciTech Connect

    Moses, Alan M.; Pollard, Daniel A.; Nix, David A.; Iyer, VenkyN.; Li, Xiao-Yong; Biggin, Mark D.; Eisen, Michael B.

    2006-07-14

    The gain and loss of functional transcription-factor binding sites has been proposed as a major source of evolutionary change in cis-regulatory DNA and gene expression. We have developed an evolutionary model to study binding site turnover that uses multiple sequence alignments to assess the evolutionary constraint on individual binding sites, and to map gain and loss events along a phylogenetic tree. We apply this model to study the evolutionary dynamics of binding sites of the Drosophila melanogaster transcription factor Zeste, using genome-wide in vivo (ChIP-chip) binding data to identify functional Zeste binding sites, and the genome sequences of D. melanogaster, D. simulans, D. erecta and D. yakuba to study their evolution. We estimate that more than 5 percent of functional Zeste binding sites in D. melanogaster were gained along the D. melanogaster lineage or lost along one of the other lineages. We find that Zeste-bound regions have a reduced rate of binding site loss and an increased rate of binding site gain relative to flanking sequences. Finally, we show that binding site gains and losses are asymmetrically distributed with respect to D. melanogaster, consistent with lineage-specific acquisition and loss of Zeste-responsive regulatory elements.

  17. Large scale brain functional networks support sentence comprehension: evidence from both explicit and implicit language tasks.

    PubMed

    Zhu, Zude; Fan, Yuanyuan; Feng, Gangyi; Huang, Ruiwang; Wang, Suiping

    2013-01-01

    Previous studies have indicated that sentences are comprehended via widespread brain regions in the fronto-temporo-parietal network in explicit language tasks (e.g., semantic congruency judgment tasks), and through restricted temporal or frontal regions in implicit language tasks (e.g., font size judgment tasks). This discrepancy has raised questions regarding a common network for sentence comprehension that acts regardless of task effect and whether different tasks modulate network properties. To this end, we constructed brain functional networks based on 27 subjects' fMRI data that was collected while performing explicit and implicit language tasks. We found that network properties and network hubs corresponding to the implicit language task were similar to those associated with the explicit language task. We also found common hubs in occipital, temporal and frontal regions in both tasks. Compared with the implicit language task, the explicit language task resulted in greater global efficiency and increased integrated betweenness centrality of the left inferior frontal gyrus, which is a key region related to sentence comprehension. These results suggest that brain functional networks support both explicit and implicit sentence comprehension; in addition, these two types of language tasks may modulate the properties of brain functional networks.

  18. Large-scale functional analysis of the roles of phosphorylation in yeast metabolic pathways.

    PubMed

    Schulz, Juliane Caroline; Zampieri, Mattia; Wanka, Stefanie; von Mering, Christian; Sauer, Uwe

    2014-11-25

    Protein phosphorylation is a widespread posttranslational modification that regulates almost all cellular functions. To investigate the large number of phosphorylation events with unknown functions, we monitored the concentrations of several hundred intracellular metabolites in Saccharomyces cerevisiae yeast strains with deletions of 118 kinases or phosphatases. Whereas most deletion strains had no detectable difference in growth compared to wild-type yeast, two-thirds of deletion strains had alterations in metabolic profiles. For about half of the kinases and phosphatases encoded by the deleted genes, we inferred specific regulatory roles on the basis of knowledge about the affected metabolic pathways. We demonstrated that the phosphatase Ppq1 was required for metal homeostasis. Combining metabolomic data with published phosphoproteomic data in a stoichiometric model enabled us to predict functions for phosphorylation in the regulation of 47 enzymes. Overall, we provided insights and testable predictions covering greater than twice the number of known phosphorylated enzymes in yeast, suggesting extensive phosphorylation-dependent regulation of yeast metabolism.

  19. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  20. Differences in human cortical gene expression match the temporal properties of large-scale functional networks.

    PubMed

    Cioli, Claudia; Abdi, Hervé; Beaton, Derek; Burnod, Yves; Mesmoudi, Salma

    2014-01-01

    We explore the relationships between the cortex functional organization and genetic expression (as provided by the Allen Human Brain Atlas). Previous work suggests that functional cortical networks (resting state and task based) are organized as two large networks (differentiated by their preferred information processing mode) shaped like two rings. The first ring--Visual-Sensorimotor-Auditory (VSA)--comprises visual, auditory, somatosensory, and motor cortices that process real time world interactions. The second ring--Parieto-Temporo-Frontal (PTF)--comprises parietal, temporal, and frontal regions with networks dedicated to cognitive functions, emotions, biological needs, and internally driven rhythms. We found--with correspondence analysis--that the patterns of expression of the 938 genes most differentially expressed across the cortex organized the cortex into two sets of regions that match the two rings. We confirmed this result using discriminant correspondence analysis by showing that the genetic profiles of cortical regions can reliably predict to what ring these regions belong. We found that several of the proteins--coded by genes that most differentiate the rings--were involved in neuronal information processing such as ionic channels and neurotransmitter release. The systematic study of families of genes revealed specific proteins within families preferentially expressed in each ring. The results showed strong congruence between the preferential expression of subsets of genes, temporal properties of the proteins they code, and the preferred processing modes of the rings. Ionic channels and release-related proteins more expressed in the VSA ring favor temporal precision of fast evoked neural transmission (Sodium channels SCNA1, SCNB1 potassium channel KCNA1, calcium channel CACNA2D2, Synaptotagmin SYT2, Complexin CPLX1, Synaptobrevin VAMP1). Conversely, genes expressed in the PTF ring favor slower, sustained, or rhythmic activation (Sodium channels SCNA3

  2. Tucker-tensor algorithm for large-scale Kohn-Sham density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Motamarri, Phani; Gavini, Vikram; Blesgen, Thomas

    2016-03-01

    In this work, we propose a systematic way of computing a low-rank globally adapted localized Tucker-tensor basis for solving the Kohn-Sham density functional theory (DFT) problem. In every iteration of the self-consistent field procedure of the Kohn-Sham DFT problem, we construct an additive separable approximation of the Kohn-Sham Hamiltonian. The Tucker-tensor basis is chosen such as to span the tensor product of the one-dimensional eigenspaces corresponding to each of the spatially separable Hamiltonians, and the localized Tucker-tensor basis is constructed from localized representations of these one-dimensional eigenspaces. This Tucker-tensor basis forms a complete basis, and is naturally adapted to the Kohn-Sham Hamiltonian. Further, the locality of this basis in real-space allows us to exploit reduced-order scaling algorithms for the solution of the discrete Kohn-Sham eigenvalue problem. In particular, we use Chebyshev filtering to compute the eigenspace of the Kohn-Sham Hamiltonian, and evaluate nonorthogonal localized wave functions spanning the Chebyshev filtered space, all represented in the Tucker-tensor basis. We thereby compute the electron-density and other quantities of interest, using a Fermi-operator expansion of the Hamiltonian projected onto the subspace spanned by the nonorthogonal localized wave functions. Numerical results on benchmark examples involving pseudopotential calculations suggest an exponential convergence of the ground-state energy with the Tucker rank. Interestingly, the rank of the Tucker-tensor basis required to obtain chemical accuracy is found to be only weakly dependent on the system size, which results in close to linear-scaling complexity for Kohn-Sham DFT calculations for both insulating and metallic systems. A comparative study has revealed significant computational efficiencies afforded by the proposed Tucker-tensor approach in comparison to a plane-wave basis.
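
    A generic illustration of the kind of low-rank Tucker representation involved (not the solver described in the abstract): a truncated higher-order SVD of a smooth scalar field on a 3D grid, written with NumPy; the field, grid size, and ranks are arbitrary choices for the example.

```python
# Truncated higher-order SVD (HOSVD): a simple way to build a rank-(r1, r2, r3)
# Tucker approximation of a 3D array.
import numpy as np

def unfold(T, mode):
    """Matricize a tensor along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Return the core tensor and factor matrices of a truncated HOSVD."""
    factors = [np.linalg.svd(unfold(T, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]
    core = T
    for U in factors:
        # Contract the current leading mode with U; modes cycle, so after all
        # contractions the original mode order is restored.
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for U in factors:
        T = np.tensordot(T, U.T, axes=([0], [0]))
    return T

# A smooth scalar field on a 40^3 grid is well approximated at modest Tucker rank.
x = np.linspace(-3.0, 3.0, 40)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
field = np.exp(-(X**2 + Y**2 + Z**2)) * np.cos(X + 2.0 * Y - Z)

core, factors = tucker_hosvd(field, ranks=(10, 10, 10))
approx = tucker_reconstruct(core, factors)
print("relative error:", np.linalg.norm(field - approx) / np.linalg.norm(field))
```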

  3. Large-scale Generation of Patterned Bubble Arrays on Printed Bi-functional Boiling Surfaces.

    PubMed

    Choi, Chang-Ho; David, Michele; Gao, Zhongwei; Chang, Alvin; Allen, Marshall; Wang, Hailei; Chang, Chih-hung

    2016-01-01

    Bubble nucleation control, growth and departure dynamics is important in understanding boiling phenomena and enhancing nucleate boiling heat transfer performance. We report a novel bi-functional heterogeneous surface structure that is capable of tuning bubble nucleation, growth and departure dynamics. For the fabrication of the surface, hydrophobic polymer dot arrays are first printed on a substrate, followed by hydrophilic ZnO nanostructure deposition via microreactor-assisted nanomaterial deposition (MAND) processing. Wettability contrast between the hydrophobic polymer dot arrays and aqueous ZnO solution allows for the fabrication of heterogeneous surfaces with distinct wettability regions. Heterogeneous surfaces with various configurations were fabricated and their bubble dynamics were examined at elevated heat flux, revealing various nucleate boiling phenomena. In particular, aligned and patterned bubbles with a tunable departure frequency and diameter were demonstrated in a boiling experiment for the first time. Taking advantage of our fabrication method, a 6 inch wafer size heterogeneous surface was prepared. Pool boiling experiments were also performed to demonstrate a heat flux enhancement up to 3X at the same surface superheat using bi-functional surfaces, compared to a bare stainless steel surface. PMID:27034255

  5. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application

    PubMed Central

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most of the existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, and different ranges use different aggregation strategies. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value-preservation and order-preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions. The first is a Random-based SEDAR (REDAR), and the second is a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost, at the cost of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenarios of real data, show that all of them perform well in terms of cost and accuracy. PMID:27551747
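
    To illustrate the general flavor of multi-functional aggregation through vector encoding (this is not the actual SEDAR encoding, and the homomorphic encryption layer is omitted), the sketch below encodes each reading into a vector whose element-wise sum over many contributors still yields a count, a sum, and an order-preserving histogram from which min, max, and approximate quantiles can be read off.

```python
# Encode each raw reading into a small vector so that plain element-wise
# summation of many encodings supports several statistics at once.
import numpy as np

EDGES = np.linspace(0.0, 100.0, 21)          # 20 histogram buckets over the assumed data range

def encode(value):
    v = np.zeros(2 + len(EDGES) - 1)
    v[0] = 1.0                               # count
    v[1] = value                             # running sum
    v[2 + np.searchsorted(EDGES, value, side="right") - 1] = 1.0   # order-preserving bucket flag
    return v

def decode(aggregate):
    count, total, hist = aggregate[0], aggregate[1], aggregate[2:]
    nonzero = np.nonzero(hist)[0]
    return {"count": int(count), "mean": total / count,
            "min_bucket": (EDGES[nonzero[0]], EDGES[nonzero[0] + 1]),
            "max_bucket": (EDGES[nonzero[-1]], EDGES[nonzero[-1] + 1])}

rng = np.random.default_rng(7)
readings = rng.uniform(5.0, 80.0, size=1000)
aggregate = sum(encode(x) for x in readings)  # what an aggregator would add up
print(decode(aggregate))
```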

  7. Efficient derivation of functional dopaminergic neurons from human embryonic stem cells on a large scale.

    PubMed

    Cho, Myung-Soo; Hwang, Dong-Youn; Kim, Dong-Wook

    2008-01-01

    Cell-replacement therapy using human embryonic stem cells (hESCs) holds great promise in treating Parkinson's disease. We have recently reported a highly efficient method to generate functional dopaminergic (DA) neurons from hESCs. Our method includes a unique step, the formation of spherical neural masses (SNMs), and offers the highest yield of DA neurons achieved so far. In this report, we describe our method step by step, covering not only how to differentiate hESCs into DA neurons at a high yield, but also how to amplify, freeze and thaw the SNMs, which are the key structures that make our protocol unique and advantageous. Although the whole process of generation of DA neurons from hESCs takes about 2 months, only 14 d are needed to derive DA neurons from the SNMs.

  8. Large-Scale and Comprehensive Immune Profiling and Functional Analysis of Normal Human Aging

    PubMed Central

    Whiting, Chan C.; Siebert, Janet; Newman, Aaron M.; Du, Hong-wu; Alizadeh, Ash A.; Goronzy, Jorg; Weyand, Cornelia M.; Krishnan, Eswar; Fathman, C. Garrison; Maecker, Holden T.

    2015-01-01

    While many age-associated immune changes have been reported, a comprehensive set of metrics of immune aging is lacking. Here we report data from 243 healthy adults aged 40–97, for whom we measured clinical and functional parameters, serum cytokines, cytokines and gene expression in stimulated and unstimulated PBMC, PBMC phenotypes, and cytokine-stimulated pSTAT signaling in whole blood. Although highly heterogeneous across individuals, many of these assays revealed trends by age, sex, and CMV status, to greater or lesser degrees. Age, then sex and CMV status, showed the greatest impact on the immune system, as measured by the percentage of assay readouts with significant differences. An elastic net regression model could optimally predict age with 14 analytes from different assays. This reinforces the importance of multivariate analysis for defining a healthy immune system. These data provide a reference for others measuring immune parameters in older people. PMID:26197454
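
    As a hedged sketch of the age-prediction step, the snippet below fits a cross-validated elastic net with scikit-learn; the feature matrix is synthetic, standing in for the study's immune assay readouts (the published model settled on 14 analytes).

```python
# Predict age from many analytes with an elastic net; the penalty keeps only
# a sparse subset of informative features.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_subjects, n_analytes = 243, 80
X = rng.standard_normal((n_subjects, n_analytes))
true_coef = np.zeros(n_analytes)
true_coef[:14] = rng.uniform(1.0, 3.0, 14)              # only a few analytes matter
age = 65 + X @ true_coef + 4.0 * rng.standard_normal(n_subjects)

Xs = StandardScaler().fit_transform(X)
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 0.95], cv=5, max_iter=10000)
model.fit(Xs, age)
print("analytes retained:", int(np.sum(model.coef_ != 0)))
print("R^2 on training data:", round(model.score(Xs, age), 2))
```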

  9. The iBeetle large-scale RNAi screen reveals gene functions for insect development and physiology.

    PubMed

    Schmitt-Engel, Christian; Schultheis, Dorothea; Schwirz, Jonas; Ströhlein, Nadi; Troelenberg, Nicole; Majumdar, Upalparna; Dao, Van Anh; Grossmann, Daniela; Richter, Tobias; Tech, Maike; Dönitz, Jürgen; Gerischer, Lizzy; Theis, Mirko; Schild, Inga; Trauner, Jochen; Koniszewski, Nikolaus D B; Küster, Elke; Kittelmann, Sebastian; Hu, Yonggang; Lehmann, Sabrina; Siemanowski, Janna; Ulrich, Julia; Panfilio, Kristen A; Schröder, Reinhard; Morgenstern, Burkhard; Stanke, Mario; Buchholz, Frank; Frasch, Manfred; Roth, Siegfried; Wimmer, Ernst A; Schoppmeier, Michael; Klingler, Martin; Bucher, Gregor

    2015-01-01

    Genetic screens are powerful tools to identify the genes required for a given biological process. However, for technical reasons, comprehensive screens have been restricted to very few model organisms. Therefore, although deep sequencing is revealing the genes of ever more insect species, the functional studies predominantly focus on candidate genes previously identified in Drosophila, which is biasing research towards conserved gene functions. RNAi screens in other organisms promise to reduce this bias. Here we present the results of the iBeetle screen, a large-scale, unbiased RNAi screen in the red flour beetle, Tribolium castaneum, which identifies gene functions in embryonic and postembryonic development, physiology and cell biology. The utility of Tribolium as a screening platform is demonstrated by the identification of genes involved in insect epithelial adhesion. This work transcends the restrictions of the candidate gene approach and opens fields of research not accessible in Drosophila. PMID:26215380

  10. The iBeetle large-scale RNAi screen reveals gene functions for insect development and physiology.

    PubMed

    Schmitt-Engel, Christian; Schultheis, Dorothea; Schwirz, Jonas; Ströhlein, Nadi; Troelenberg, Nicole; Majumdar, Upalparna; Dao, Van Anh; Grossmann, Daniela; Richter, Tobias; Tech, Maike; Dönitz, Jürgen; Gerischer, Lizzy; Theis, Mirko; Schild, Inga; Trauner, Jochen; Koniszewski, Nikolaus D B; Küster, Elke; Kittelmann, Sebastian; Hu, Yonggang; Lehmann, Sabrina; Siemanowski, Janna; Ulrich, Julia; Panfilio, Kristen A; Schröder, Reinhard; Morgenstern, Burkhard; Stanke, Mario; Buchholz, Frank; Frasch, Manfred; Roth, Siegfried; Wimmer, Ernst A; Schoppmeier, Michael; Klingler, Martin; Bucher, Gregor

    2015-07-28

    Genetic screens are powerful tools to identify the genes required for a given biological process. However, for technical reasons, comprehensive screens have been restricted to very few model organisms. Therefore, although deep sequencing is revealing the genes of ever more insect species, the functional studies predominantly focus on candidate genes previously identified in Drosophila, which is biasing research towards conserved gene functions. RNAi screens in other organisms promise to reduce this bias. Here we present the results of the iBeetle screen, a large-scale, unbiased RNAi screen in the red flour beetle, Tribolium castaneum, which identifies gene functions in embryonic and postembryonic development, physiology and cell biology. The utility of Tribolium as a screening platform is demonstrated by the identification of genes involved in insect epithelial adhesion. This work transcends the restrictions of the candidate gene approach and opens fields of research not accessible in Drosophila.

  11. Quasi-periodic patterns (QPP): large-scale dynamics in resting state fMRI that correlate with local infraslow electrical activity.

    PubMed

    Thompson, Garth John; Pan, Wen-Ju; Magnuson, Matthew Evan; Jaeger, Dieter; Keilholz, Shella Dawn

    2014-01-01

    Functional connectivity measurements from resting state blood-oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) are proving a powerful tool to probe both normal brain function and neuropsychiatric disorders. However, the neural mechanisms that coordinate these large networks are poorly understood, particularly in the context of the growing interest in network dynamics. Recent work in anesthetized rats has shown that the spontaneous BOLD fluctuations are tightly linked to infraslow local field potentials (LFPs) that are seldom recorded but comparable in frequency to the slow BOLD fluctuations. These findings support the hypothesis that long-range coordination involves low frequency neural oscillations and establishes infraslow LFPs as an excellent candidate for probing the neural underpinnings of the BOLD spatiotemporal patterns observed in both rats and humans. To further examine the link between large-scale network dynamics and infraslow LFPs, simultaneous fMRI and microelectrode recording were performed in anesthetized rats. Using an optimized filter to isolate shared components of the signals, we found that time-lagged correlation between infraslow LFPs and BOLD is comparable in spatial extent and timing to a quasi-periodic pattern (QPP) found from BOLD alone, suggesting that fMRI-measured QPPs and the infraslow LFPs share a common mechanism. As fMRI allows spatial resolution and whole brain coverage not available with electroencephalography, QPPs can be used to better understand the role of infraslow oscillations in normal brain function and neurological or psychiatric disorders.
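
    The time-lagged correlation analysis described above can be sketched in a few lines. The signals, sampling rate, and lag of interest below are synthetic assumptions, not the study's optimized filter or real recordings.
    ```python
    # Hypothetical sketch: time-lagged correlation between a slow "LFP-like"
    # signal and a "BOLD-like" signal, illustrating the kind of lag analysis
    # described above (not the study's filter design or data).
    import numpy as np

    fs = 2.0                                    # samples per second (assumed TR of 0.5 s)
    t = np.arange(0, 600, 1 / fs)
    rng = np.random.default_rng(1)

    lfp = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=t.size)   # infraslow ~0.05 Hz
    bold = np.roll(lfp, int(4 * fs)) + 0.3 * rng.normal(size=t.size)     # delayed ~4 s copy

    def lagged_corr(x, y, max_lag_s, fs):
        """Pearson correlation of x with y shifted over a range of lags."""
        lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
        r = np.array([np.corrcoef(x, np.roll(y, -lag))[0, 1] for lag in lags])
        return lags / fs, r

    lags_s, r = lagged_corr(lfp, bold, max_lag_s=10, fs=fs)
    print("peak correlation %.2f at lag %.1f s" % (r.max(), lags_s[np.argmax(r)]))
    ```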

  12. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one-Weibull component can provide poor characterizations for aggregated wind power generation. To this end, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are firstly classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
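
    The model-selection idea (fit k-component Weibull mixtures by maximum likelihood, then compare AIC and BIC) can be sketched as follows. The optimizer, parameterization, and synthetic data are assumptions for illustration, not the paper's procedure or datasets.
    ```python
    # Hypothetical sketch: choose the number of Weibull components for a wind-power
    # PDF by comparing AIC and BIC, in the spirit of the approach described above.
    import numpy as np
    from scipy import optimize, stats

    # synthetic "aggregated wind power" sample drawn from a 2-component mixture
    data = np.concatenate([
        stats.weibull_min.rvs(c=1.5, scale=0.2, size=700, random_state=1),
        stats.weibull_min.rvs(c=4.0, scale=0.8, size=300, random_state=2),
    ])

    def neg_log_like(theta, x, k):
        """theta packs k weights (via softmax), k shapes and k scales (via exp)."""
        w = np.exp(theta[:k]); w /= w.sum()
        shape = np.exp(theta[k:2 * k])
        scale = np.exp(theta[2 * k:3 * k])
        pdf = sum(w[i] * stats.weibull_min.pdf(x, c=shape[i], scale=scale[i])
                  for i in range(k))
        return -np.sum(np.log(pdf + 1e-300))

    for k in (1, 2, 3):
        theta0 = np.concatenate([np.zeros(k),                      # equal weights
                                 np.log(np.full(k, 2.0)),          # shape guesses
                                 np.log(np.quantile(data, np.linspace(0.2, 0.8, k)))])
        res = optimize.minimize(neg_log_like, theta0, args=(data, k),
                                method="Nelder-Mead", options={"maxiter": 20000})
        n_par = 3 * k - 1                      # the k weights carry k-1 free parameters
        aic = 2 * n_par + 2 * res.fun
        bic = n_par * np.log(data.size) + 2 * res.fun
        print(f"k={k}: AIC={aic:.1f}  BIC={bic:.1f}")
    ```
    Lower AIC/BIC indicates the preferred number of components; BIC penalizes extra components more strongly for large samples.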

  13. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study

    PubMed Central

    Kumar, G. Vinodh; Halder, Tamesh; Jaiswal, Amit K.; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, whether and how the network across the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300–600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus

  14. Decoding the Role of the Insula in Human Cognition: Functional Parcellation and Large-Scale Reverse Inference

    PubMed Central

    Yarkoni, Tal; Khaw, Mel Win; Sanfey, Alan G.

    2013-01-01

    Recent work has indicated that the insula may be involved in goal-directed cognition, switching between networks, and the conscious awareness of affect and somatosensation. However, these findings have been limited by the insula’s remarkably high base rate of activation and considerable functional heterogeneity. The present study used a relatively unbiased data-driven approach combining resting-state connectivity-based parcellation of the insula with large-scale meta-analysis to understand how the insula is anatomically organized based on functional connectivity patterns as well as the consistency and specificity of the associated cognitive functions. Our findings support a tripartite subdivision of the insula and reveal that the patterns of functional connectivity in the resting-state analysis appear to be relatively conserved across tasks in the meta-analytic coactivation analysis. The function of the networks was meta-analytically “decoded” using the Neurosynth framework and revealed that while the dorsoanterior insula is more consistently involved in human cognition than ventroanterior and posterior networks, each parcellated network is specifically associated with a distinct function. Collectively, this work suggests that the insula is instrumental in integrating disparate functional systems involved in processing affect, sensory-motor processing, and general cognition and is well suited to provide an interface between feelings, cognition, and action. PMID:22437053

  15. Ensuring clinical utility and function in a large scale national project in Australia by embedding clinical informatics into design.

    PubMed

    Pearce, Christopher; Macdougall, Cecily; Bainbridge, Michael; Davidson, Jane

    2013-01-01

    Across the globe, healthcare delivery is being transformed by electronic sharing of health information. Such large scale health projects with a national focus are a challenge to design and implement. Delivering clinical outcomes in the context of policy, technical, and design environments represents a particular challenge. On July 1, 2012, Australia delivered the first stage of a personally controlled electronic health record - a national program for sharing a variety of health information between health professionals and between health professionals and consumers. As build of the system commenced, deficiencies of the traditional stakeholder consultation model were identified and replaced by a more structured approach, called clinical functional assurance. Utilising clinical scenarios linked to detailed design requirements, a team of clinicians certified clinical utility at implementation and release points.

  16. The Joint Statistics of California Temperature and Precipitation as a Function of the Large-scale State of the Climate

    NASA Astrophysics Data System (ADS)

    O'Brien, J. P.; O'Brien, T. A.

    2015-12-01

    Single climatic extremes have a strong and disproportionate effect on society and the natural environment. However, the joint occurrence of two or more concurrent extremes has the potential to negatively impact these areas of life in ways far greater than any single event could. California, USA, home to nearly 40 million people and the largest agricultural producer in the United States, is currently experiencing an extreme drought, which has persisted for several years. While drought is commonly thought of in terms of only precipitation deficits, above average temperatures co-occurring with precipitation deficits greatly exacerbate drought conditions. The 2014 calendar year in California was characterized both by extremely low precipitation and extremely high temperatures, which has significantly deepened the already extreme drought conditions leading to severe water shortages and wildfires. While many studies have shown the statistics of 2014 temperature and precipitation anomalies as outliers, none have demonstrated a connection with large-scale, long-term climate trends, which would provide useful relationships for predicting the future trajectory of California climate and water resources. We focus on understanding non-stationarity in the joint distribution of California temperature and precipitation anomalies in terms of large-scale, low-frequency trends in climate such as global mean temperature rise and oscillatory indices such as ENSO and the Pacific Decadal Oscillation among others. We consider temperature and precipitation data from the seven distinct climate divisions in California and employ a novel, high-fidelity kernel density estimation method to directly infer the multivariate distribution of temperature and precipitation anomalies conditioned on the large-scale state of the climate. We show that the joint distributions and associated statistics of temperature and precipitation are non-stationary and vary regionally in California. Further, we show
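
    The core idea of conditioning a joint temperature-precipitation density on a large-scale climate index can be illustrated with an ordinary Gaussian KDE. This is only a sketch: the study uses its own high-fidelity kernel density estimation method and real climate-division data, whereas the index, anomalies, and evaluation point below are synthetic assumptions.
    ```python
    # Hypothetical sketch: joint PDF of temperature and precipitation anomalies,
    # conditioned on the sign of a large-scale climate index (e.g., an ENSO-like
    # index), using a standard Gaussian KDE rather than the study's method.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    n_years = 120
    enso = rng.normal(size=n_years)                       # stand-in climate index
    temp_anom = 0.4 * enso + rng.normal(size=n_years)
    precip_anom = -0.5 * enso + rng.normal(size=n_years)

    def conditional_joint_kde(t, p, mask):
        """KDE of the (T, P) joint density restricted to years where mask is True."""
        return gaussian_kde(np.vstack([t[mask], p[mask]]))

    kde_warm = conditional_joint_kde(temp_anom, precip_anom, enso > 0.5)
    kde_cool = conditional_joint_kde(temp_anom, precip_anom, enso < -0.5)

    # density near a "hot and dry" anomaly under the two large-scale states
    point = np.array([[1.5], [-1.5]])
    print("warm-state density:", kde_warm(point)[0])
    print("cool-state density:", kde_cool(point)[0])
    ```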

  17. Large-scale functional network reorganization in 22q11.2 deletion syndrome revealed by modularity analysis.

    PubMed

    Scariati, Elisa; Schaer, Marie; Karahanoglu, Isik; Schneider, Maude; Richiardi, Jonas; Debbané, Martin; Van De Ville, Dimitri; Eliez, Stephan

    2016-09-01

    The 22q11.2 deletion syndrome (22q11DS) is associated with cognitive impairments and a 41% risk of developing schizophrenia. While several studies performed on patients with 22q11DS showed the presence of abnormal functional connectivity in this syndrome, how these alterations affect large-scale network organization is still unknown. Here we performed a network modularity analysis on whole-brain functional connectomes derived from the resting-state fMRI of 40 patients with 22q11DS and 41 healthy control participants, aged between 9 and 30 years old. We then split the sample at 18 years old to obtain two age subgroups and repeated the modularity analyses. We found alterations of modular communities affecting the visuo-spatial network and the anterior cingulate cortex (ACC) in both age groups. These results corroborate previous structural and functional studies in 22q11DS that showed early impairment of visuo-spatial processing regions. Furthermore, as ACC has been linked to the development of psychotic symptoms in 22q11DS, the early impairment of its functional connectivity provides further support that ACC alterations may serve as potential biomarkers for an increased risk of schizophrenia. Finally, we found an abnormal modularity partition of the dorsolateral prefrontal cortex (DLPFC) only in adults with 22q11DS, suggesting the presence of an abnormal development of functional network communities during adolescence in 22q11DS. PMID:27371790
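
    A common way to run the kind of modularity analysis described above is to build a functional connectivity matrix from regional time series, threshold it into a weighted graph, and detect communities. The sketch below uses networkx and synthetic time series; it is not the specific pipeline of this paper.
    ```python
    # Hypothetical sketch: network modularity analysis of a functional connectome,
    # i.e. the general kind of analysis described above (not the study's pipeline).
    # Time series are synthetic; in practice they would be regional fMRI signals.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    rng = np.random.default_rng(0)
    n_regions, n_timepoints = 40, 200
    ts = rng.normal(size=(n_regions, n_timepoints))
    ts[:20] += 0.8 * rng.normal(size=n_timepoints)     # shared signal -> module 1
    ts[20:] += 0.8 * rng.normal(size=n_timepoints)     # shared signal -> module 2

    fc = np.corrcoef(ts)                               # functional connectivity matrix
    np.fill_diagonal(fc, 0.0)

    # keep only the strongest positive connections (simple proportional threshold)
    threshold = np.quantile(fc[fc > 0], 0.8)
    graph = nx.from_numpy_array(np.where(fc >= threshold, fc, 0.0))

    communities = greedy_modularity_communities(graph, weight="weight")
    q = modularity(graph, communities, weight="weight")
    print(f"{len(communities)} modules, modularity Q = {q:.2f}")
    ```
    Group comparisons (e.g., patients versus controls, or age subgroups) then amount to comparing the detected partitions and Q values between cohorts.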

  18. Low-cost and large-scale synthesis of functional porous materials for phosphate removal with high performance

    NASA Astrophysics Data System (ADS)

    Emmanuelawati, Irene; Yang, Jie; Zhang, Jun; Zhang, Hongwei; Zhou, Liang; Yu, Chengzhong

    2013-06-01

    A facile spray drying technique has been developed for large-scale and template-free production of nanoporous silica with controlled morphology, large pore size, and high pore volume, using commercially available fumed silica, Aerosil 200, as a sole precursor. This approach can be applied to the preparation of functional nanoporous materials, in this study, lanthanum oxide functionalised silica microspheres by introducing lanthanum nitrate in situ during the spray drying process and followed by a post-calcination process. The resultant lanthanum functionalised Aerosil microspheres manifest high phosphate adsorption capacity (up to 2.317 mmol g-1), fast kinetics, and excellent adsorption performance at a low phosphate concentration (1 mg L-1). In virtue of the easy and scalable synthesis method, low cost and high performances of the product, the materials we reported here are promising for water treatment. Our approach may be general and extended to the synthesis of other functional nanoporous materials with versatile applications.

  19. Transcranial direct current stimulation changes resting state functional connectivity: A large-scale brain network modeling study.

    PubMed

    Kunze, Tim; Hunold, Alexander; Haueisen, Jens; Jirsa, Viktor; Spiegler, Andreas

    2016-10-15

    Transcranial direct current stimulation (tDCS) is a noninvasive technique for affecting brain dynamics with promising application in the clinical therapy of neurological and psychiatric disorders such as Parkinson's disease, Alzheimer's disease, depression, and schizophrenia. Resting state dynamics increasingly play a role in the assessment of connectivity-based pathologies such as Alzheimer's and schizophrenia. We systematically applied tDCS in a large-scale network model of 74 cerebral areas, investigating the spatiotemporal changes in dynamic states as a function of structural connectivity changes. Structural connectivity was defined by the human connectome. The main findings of this study are fourfold: Firstly, we found a tDCS-induced increase in functional connectivity among cerebral areas and among EEG sensors, where the latter reproduced empirical findings of other researchers. Secondly, the analysis of the network dynamics suggested synchronization to be the main mechanism of the observed effects. Thirdly, we found that tDCS sharpens and shifts the frequency distribution of scalp EEG sensors slightly towards higher frequencies. Fourthly, new dynamic states emerged through interacting areas in the network compared to the dynamics of an isolated area. The findings propose synchronization as a key mechanism underlying the changes in the spatiotemporal pattern formation due to tDCS. Our work supports the notion that noninvasive brain stimulation is able to bias brain dynamics by affecting the competitive interplay of functional subnetworks.
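
    The workflow described above (simulate a connectome-based network, add a stimulation input, compare resulting functional connectivity) can be sketched with a toy phase-oscillator network. This is emphatically not the neural-mass model of 74 areas used in the study; the connectome, oscillator model, and stimulation strength below are illustrative assumptions, and no claim is made here about the direction of the effect.
    ```python
    # Hypothetical sketch: comparing simulated "functional connectivity" with and
    # without a constant stimulation input in a toy phase-oscillator network.
    # This is NOT the study's large-scale brain network model; it only illustrates
    # the workflow (simulate -> compute FC -> compare conditions).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 74                                           # number of areas, as in the study
    structural = (rng.random((n, n)) < 0.2) * 0.1    # sparse random "connectome" (assumption)
    omega = rng.normal(1.0, 0.05, size=n)            # natural frequencies (assumption)

    def simulate(bias, t_max=100.0, dt=0.01, seed=1):
        r = np.random.default_rng(seed)
        theta = r.uniform(0, 2 * np.pi, size=n)
        signal = []
        for _ in range(int(t_max / dt)):
            phase_diff = np.sin(theta[None, :] - theta[:, None])
            theta = theta + dt * (omega + (structural * phase_diff).sum(axis=1) + bias)
            signal.append(np.sin(theta))
        return np.corrcoef(np.array(signal).T)       # FC = correlation between areas

    fc_rest = simulate(bias=np.zeros(n))
    stim = np.zeros(n); stim[:10] = 0.2              # bias 10 areas "under the electrode"
    fc_stim = simulate(bias=stim)

    iu = np.triu_indices(n, k=1)
    print("mean FC at rest: %.3f  with stimulation: %.3f"
          % (fc_rest[iu].mean(), fc_stim[iu].mean()))
    ```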

  20. The integration of large-scale neural network modeling and functional brain imaging in speech motor control

    PubMed Central

    Golfinopoulos, E.; Tourville, J.A.; Guenther, F.H.

    2009-01-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data. PMID:19837177

  1. Large-Scale Chromatin Structure-Function Relationships during the Cell Cycle and Development: Insights from Replication Timing.

    PubMed

    Dileep, Vishnu; Rivera-Mulia, Juan Carlos; Sima, Jiao; Gilbert, David M

    2015-01-01

    Chromosome architecture has received a lot of attention since the recent development of genome-scale methods to measure chromatin interactions (Hi-C), enabling the first sequence-based models of chromosome tertiary structure. A view has emerged of chromosomes as a string of structural units (topologically associating domains; TADs) whose boundaries persist through the cell cycle and development. TADs with similar chromatin states tend to aggregate, forming spatially segregated chromatin compartments. However, high-resolution Hi-C has revealed substructure within TADs (subTADs) that poses a challenge for models that attribute significance to structural units at any given scale. More than 20 years ago, the DNA replication field independently identified stable structural (and functional) units of chromosomes (replication foci) as well as spatially segregated chromatin compartments (early and late foci), but lacked the means to link these units to genomic map units. Genome-wide studies of replication timing (RT) have now merged these two disciplines by identifying individual units of replication regulation (replication domains; RDs) that correspond to TADs and are arranged in 3D to form spatiotemporally segregated subnuclear compartments. Furthermore, classifying RDs/TADs by their constitutive versus developmentally regulated RT has revealed distinct classes of chromatin organization, providing unexpected insight into the relationship between large-scale chromosome structure and function. PMID:26590169

  2. The integration of large-scale neural network modeling and functional brain imaging in speech motor control.

    PubMed

    Golfinopoulos, E; Tourville, J A; Guenther, F H

    2010-09-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data.

  3. Large-Scale Genome-Wide Association Studies and Meta-Analyses of Longitudinal Change in Adult Lung Function

    PubMed Central

    Tang, Wenbo; Kowgier, Matthew; Loth, Daan W.; Soler Artigas, María; Joubert, Bonnie R.; Hodge, Emily; Gharib, Sina A.; Smith, Albert V.; Ruczinski, Ingo; Gudnason, Vilmundur; Mathias, Rasika A.; Harris, Tamara B.; Hansel, Nadia N.; Launer, Lenore J.; Barnes, Kathleen C.; Hansen, Joyanna G.; Albrecht, Eva; Aldrich, Melinda C.; Allerhand, Michael; Barr, R. Graham; Brusselle, Guy G.; Couper, David J.; Curjuric, Ivan; Davies, Gail; Deary, Ian J.; Dupuis, Josée; Fall, Tove; Foy, Millennia; Franceschini, Nora; Gao, Wei; Gläser, Sven; Gu, Xiangjun; Hancock, Dana B.; Heinrich, Joachim; Hofman, Albert; Imboden, Medea; Ingelsson, Erik; James, Alan; Karrasch, Stefan; Koch, Beate; Kritchevsky, Stephen B.; Kumar, Ashish; Lahousse, Lies; Li, Guo; Lind, Lars; Lindgren, Cecilia; Liu, Yongmei; Lohman, Kurt; Lumley, Thomas; McArdle, Wendy L.; Meibohm, Bernd; Morris, Andrew P.; Morrison, Alanna C.; Musk, Bill; North, Kari E.; Palmer, Lyle J.; Probst-Hensch, Nicole M.; Psaty, Bruce M.; Rivadeneira, Fernando; Rotter, Jerome I.; Schulz, Holger; Smith, Lewis J.; Sood, Akshay; Starr, John M.; Strachan, David P.; Teumer, Alexander; Uitterlinden, André G.; Völzke, Henry; Voorman, Arend; Wain, Louise V.; Wells, Martin T.; Wilk, Jemma B.; Williams, O. Dale; Heckbert, Susan R.; Stricker, Bruno H.; London, Stephanie J.; Fornage, Myriam; Tobin, Martin D.; O′Connor, George T.; Hall, Ian P.; Cassano, Patricia A.

    2014-01-01

    Background Genome-wide association studies (GWAS) have identified numerous loci influencing cross-sectional lung function, but less is known about genes influencing longitudinal change in lung function. Methods We performed GWAS of the rate of change in forced expiratory volume in the first second (FEV1) in 14 longitudinal, population-based cohort studies comprising 27,249 adults of European ancestry using linear mixed effects models and combined cohort-specific results using fixed effect meta-analysis to identify novel genetic loci associated with longitudinal change in lung function. Gene expression analyses were subsequently performed for identified genetic loci. As a secondary aim, we estimated the mean rate of decline in FEV1 by smoking pattern, irrespective of genotypes, across these 14 studies using meta-analysis. Results The overall meta-analysis produced suggestive evidence for association at the novel IL16/STARD5/TMC3 locus on chromosome 15 (P = 5.71 × 10⁻⁷). In addition, meta-analysis using the five cohorts with ≥3 FEV1 measurements per participant identified the novel ME3 locus on chromosome 11 (P = 2.18 × 10⁻⁸) at genome-wide significance. Neither locus was associated with FEV1 decline in two additional cohort studies. We confirmed gene expression of IL16, STARD5, and ME3 in multiple lung tissues. Publicly available microarray data confirmed differential expression of all three genes in lung samples from COPD patients compared with controls. Irrespective of genotypes, the combined estimate for FEV1 decline was 26.9, 29.2 and 35.7 mL/year in never, former, and persistent smokers, respectively. Conclusions In this large-scale GWAS, we identified two novel genetic loci in association with the rate of change in FEV1 that harbor candidate genes with biologically plausible functional links to lung function. PMID:24983941
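
    The fixed-effect meta-analysis step referred to above is conventionally done by inverse-variance weighting of the per-cohort effect estimates. The sketch below shows that calculation for a single variant; the cohort betas and standard errors are made-up numbers, not values from this study.
    ```python
    # Hypothetical sketch: inverse-variance-weighted fixed-effect meta-analysis of
    # per-cohort effect estimates for one variant (numbers below are made up).
    import numpy as np
    from scipy.stats import norm

    beta = np.array([-1.8, -2.3, -1.2, -2.0, -1.5])   # per-cohort effect on FEV1 decline (mL/yr per allele)
    se = np.array([0.9, 1.1, 0.8, 1.0, 0.7])          # per-cohort standard errors

    w = 1.0 / se**2                                   # inverse-variance weights
    beta_meta = np.sum(w * beta) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    z = beta_meta / se_meta
    p = 2 * norm.sf(abs(z))                           # two-sided p-value

    print(f"combined beta = {beta_meta:.2f} +/- {se_meta:.2f}, P = {p:.2e}")
    ```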

  4. Development and in silico evaluation of large-scale metabolite identification methods using functional group detection for metabolomics.

    PubMed

    Mitchell, Joshua M; Fan, Teresa W-M; Lane, Andrew N; Moseley, Hunter N B

    2014-01-01

    Large-scale identification of metabolites is key to elucidating and modeling metabolism at the systems level. Advances in metabolomics technologies, particularly ultra-high resolution mass spectrometry (MS), enable comprehensive and rapid analysis of metabolites. However, a significant barrier to meaningful data interpretation is the identification of a wide range of metabolites including unknowns and the determination of their role(s) in various metabolic networks. Chemoselective (CS) probes to tag metabolite functional groups combined with high mass accuracy provide additional structural constraints for metabolite identification and quantification. We have developed a novel algorithm, Chemically Aware Substructure Search (CASS), that efficiently detects functional groups within existing metabolite databases, allowing for combined molecular formula and functional group (from CS tagging) queries to aid in metabolite identification without a priori knowledge. Analysis of the isomeric compounds in both Human Metabolome Database (HMDB) and KEGG Ligand demonstrated a high percentage of isomeric molecular formulae (43 and 28%, respectively), indicating the necessity for techniques such as CS-tagging. Furthermore, these two databases have only moderate overlap in molecular formulae. Thus, it is prudent to use multiple databases in metabolite assignment, since each major metabolite database represents different portions of metabolism within the biosphere. In silico analysis of various CS-tagging strategies under different conditions for adduct formation demonstrates that combined FT-MS derived molecular formulae and CS-tagging can uniquely identify up to 71% of KEGG and 37% of the combined KEGG/HMDB database vs. 41 and 17%, respectively, without adduct formation. This difference between database isomer disambiguation highlights the strength of CS-tagging for non-lipid metabolite identification. However, unique identification of complex lipids still needs additional
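
    The reason a functional-group constraint helps can be shown with a toy filter over a tiny candidate dictionary: a molecular formula alone leaves several isomers, while adding a CS-tag-derived group count removes some of them. This is not the CASS algorithm or a real database; the entries and group annotations are illustrative assumptions.
    ```python
    # Hypothetical sketch: combining a molecular formula with functional-group
    # counts (as provided by chemoselective tagging) to narrow down isomeric
    # candidates. Toy dictionary only; NOT HMDB/KEGG and NOT the CASS algorithm.
    candidates = [
        {"name": "alanine",        "formula": "C3H7NO2", "amine": 1, "carbonyl": 0},
        {"name": "sarcosine",      "formula": "C3H7NO2", "amine": 1, "carbonyl": 0},
        {"name": "lactamide",      "formula": "C3H7NO2", "amine": 0, "carbonyl": 1},
        {"name": "glyceraldehyde", "formula": "C3H6O3",  "amine": 0, "carbonyl": 1},
    ]

    def match(formula, required_groups):
        """Return candidates with the given formula and at least the tagged groups."""
        return [c["name"] for c in candidates
                if c["formula"] == formula
                and all(c.get(g, 0) >= n for g, n in required_groups.items())]

    # formula alone leaves three isomers; adding the amine tag removes one of them
    print(match("C3H7NO2", {}))               # ['alanine', 'sarcosine', 'lactamide']
    print(match("C3H7NO2", {"amine": 1}))     # ['alanine', 'sarcosine']
    ```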

  5. Large-scale deformed quasiparticle random-phase approximation calculations of the γ -ray strength function using the Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Péru, S.; Hilaire, S.; Goriely, S.; Lechaftois, F.

    2016-07-01

    Valuable theoretical predictions of nuclear dipole excitations in the whole chart are of great interest for different nuclear applications, including in particular nuclear astrophysics. Here we present large-scale calculations of the E1 γ-ray strength function obtained in the framework of the axially symmetric deformed quasiparticle random-phase approximation based on the finite-range Gogny force. This approach is applied to even-even nuclei, the strength function for odd nuclei being derived by interpolation. The convergence with respect to the adopted number of harmonic oscillator shells and the cutoff energy introduced in the 2-quasiparticle (2-qp) excitation space is analyzed. The calculations performed with two different Gogny interactions, namely D1S and D1M, are compared. A systematic energy shift of the E1 strength is found for D1M relative to D1S, leading to a lower energy centroid and a smaller energy-weighted sum rule for D1M. When comparing with experimental photoabsorption data, the Gogny-QRPA predictions are found to overestimate the giant dipole energy by typically ~2 MeV. Despite the microscopic nature of our self-consistent Hartree-Fock-Bogoliubov plus QRPA calculation, some phenomenological corrections need to be included to take into account the effects beyond the standard 2-qp QRPA excitations and the coupling between the single-particle and low-lying collective phonon degrees of freedom. For this purpose, three prescriptions of folding procedure are considered and adjusted to reproduce experimental photoabsorption data at best. All of them are shown to lead to somewhat similar predictions of the E1 strength, both at low energies and for exotic neutron-rich nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for the whole Sn isotopic chain are also discussed and compared with previous theoretical calculations.
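
    For orientation, the quantity being computed is conventionally related to the average photoabsorption cross section by the textbook definition below (units and constants depend on convention); this is the standard relation for the E1 strength function, not the paper's specific folding prescription.
    ```latex
    % Standard definition of the E1 gamma-ray strength function in terms of the
    % average E1 photoabsorption cross section (textbook relation; not one of the
    % paper's folding prescriptions):
    f_{E1}(\varepsilon_\gamma) \;=\;
      \frac{\langle \sigma_{E1}(\varepsilon_\gamma) \rangle}{3\,(\pi \hbar c)^{2}\,\varepsilon_\gamma}
    ```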

  6. Large scale characterization of unsaturated soil properties in a semi-arid region combining infiltration, pedotransfer functions and evaporation tests

    NASA Astrophysics Data System (ADS)

    Shabou, Marouen; Angulo-Jaramillo, Rafael; Lassabatère, Laurent; Boulet, Gilles; Mougenot, Bernard; Lili Chabaane, Zohra; Zribi, Mehrez

    2016-04-01

    Water resource management is a major issue in semi-arid regions, especially where irrigated agriculture is dominant on soils with highly variable clay content. Indeed, topsoil clay content has a significant influence on infiltration and evaporation processes and therefore on the estimation of the volume of water needed for crops. In this poster we present several methods to estimate the wilting point, the field capacity volumetric water content and the saturated hydraulic conductivity of the Kairouan plain (680 km²), central Tunisia (North Africa). The first method relies on the Beerkan Estimation of Soil Transfer parameters (BEST) method, which consists in a local estimate of unsaturated soil hydraulic properties from a single-ring infiltration test, combined with the use of pedotransfer functions applied to the different soil types of the Kairouan plain. Results are obtained over six different topsoil texture classes along the Kairouan plain. Saturated hydraulic conductivity is high for coarse textured and some of the fine textured soils due to shrinkage cracking-macropore soil structure. The saturated hydraulic conductivity values are respectively 1.31 × 10⁻⁵ m s⁻¹ and 1.71 × 10⁻⁵ m s⁻¹. The second method is based on evaporation tests on different test plots. It consists of analyzing soil moisture profile changes during the dry down periods to detect the time-to-stress, which can be obtained from observation of soil moisture variation, albedo measurements and variation of soil temperature. Results show that the parameters estimated with the evaporation method are close to those obtained by combining the BEST method and pedotransfer functions. The results validate that combining local infiltration tests and pedotransfer functions is a promising tool for the large scale hydraulic characterization of regions with strong spatial variability of soil properties.

  7. Local unitary transformation method toward practical electron correlation calculations with scalar relativistic effect in large-scale molecules

    NASA Astrophysics Data System (ADS)

    Seino, Junji; Nakai, Hiromi

    2013-07-01

    In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012), 10.1063/1.4729463; J. Seino and H. Nakai, J. Chem. Phys. 137, 144101 (2012), 10.1063/1.4757263], which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree-Fock (HF) and electron correlation methods, such as the second-order Møller-Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications in hydrogen halide molecules, (HX)ₙ (X = F, Cl, Br, and I), coinage metal chain systems, Mₙ (M = Cu and Ag), and a platinum-terminated polyynediyl chain, trans,trans-{(p-CH₃C₆H₄)₃P}₂(C₆H₅)Pt(C≡C)₄Pt(C₆H₅){(p-CH₃C₆H₄)₃P}₂, clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods with small computational costs. The combination of both LUT and DC techniques could be the first approach that achieves overall quasi-linear-scaling with a small prefactor for relativistic electron correlation calculations.

  8. Positive-selection and ligation-independent cloning vectors for large scale in planta expression for plant functional genomics.

    PubMed

    Oh, Sang-Keun; Kim, Saet-Byul; Yeom, Seon-In; Lee, Hyun-Ah; Choi, Doil

    2010-12-01

    Transient expression is an easy, rapid and powerful technique for producing proteins of interest in plants. Recombinational cloning is highly efficient but has disadvantages, including complicated, time consuming cloning procedures and expensive enzymes for large-scale gene cloning. To overcome these limitations, we developed new ligation-independent cloning (LIC) vectors derived from binary vectors including tobacco mosaic virus (pJL-TRBO), potato virus X (pGR106) and the pBI121 vector-based pMBP1. LIC vectors were modified to enable directional cloning of PCR products without restriction enzyme digestion or ligation reactions. In addition, the ccdB gene, which encodes a potent cell-killing protein, was introduced between the two LIC adapter sites in the pJL-LIC, pGR-LIC, and pMBP-LIC vectors for the efficient selection of recombinant clones. This new vector does not require restriction enzymes, alkaline phosphatase, or DNA ligase for cloning. To clone, the three LIC vectors are digested with SnaBI and treated with T4 DNA polymerase, which includes 3' to 5' exonuclease activity in the presence of only one dNTP (dGTP for the inserts and dCTP for the vector). To make recombinants, the vector plasmid and the insert PCR fragment were annealed at room temperature for 20 min prior to transformation into the host. Bacterial transformation was accomplished with 100% efficiency. To validate the new LIC vector systems, they were used to coexpress the Phytophthora AVR and potato resistance (R) genes in N. benthamiana by infiltration of Agrobacterium. Coexpressed AVR and R genes in N. benthamiana induced the typical hypersensitive cell death resulting from in vivo interaction of the two proteins. These LIC vectors could be efficiently used for high-throughput cloning and laboratory-scale in planta expression. These vectors could provide a powerful tool for high-throughput transient expression assays for functional genomic studies in plants. PMID:21340673

  9. On using large scale correlation of the Ly-α forest and redshifted 21-cm signal to probe HI distribution during the post reionization era

    SciTech Connect

    Sarkar, Tapomoy Guha; Datta, Kanan K. E-mail: kanan.physics@presiuniv.ac.in

    2015-08-01

    We investigate the possibility of detecting the 3D cross correlation power spectrum of the Ly-α forest and HI 21 cm signal from the post reionization epoch. The cross-correlation signal is directly dependent on the dark matter power spectrum and is sensitive to the 21-cm brightness temperature and Ly-α forest biases. These bias parameters dictate the strength of anisotropy in redshift space. We find that the cross-correlation power spectrum can be detected using 400 hrs observation with SKA-mid (phase 1) and a futuristic BOSS like experiment with a quasar (QSO) density of 30 deg⁻² at a peak SNR of 15 for a single field experiment at redshift z = 2.5 on large scales using the linear bias model. We also study the possibility of constraining various bias parameters using the cross power spectrum. We find that with the same experiment 1σ (conditional errors) on the 21-cm linear redshift space distortion parameter β_T and β_F corresponding to the Ly-α forest are ∼ 2.7 % and ∼ 1.4 % respectively for 01 independent pointings of the SKA-mid (phase 1). This prediction indicates a significant improvement over existing measurements. We claim that the detection of the 3D cross correlation power spectrum will not only ascertain the cosmological origin of the signal in presence of astrophysical foregrounds but will also provide stringent constraints on large scale HI biases. This provides an independent probe towards understanding cosmological structure formation.
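
    For orientation, forecasts of this kind are usually based on the generic linear (Kaiser) form of the cross power spectrum shown below; the notation and normalization are assumptions here (e.g., the mean 21-cm brightness temperature is taken as absorbed into b_T) and are not necessarily the paper's exact convention.
    ```latex
    % Generic linear redshift-space (Kaiser) form of the 21-cm x Ly-alpha forest
    % cross power spectrum, with mu the cosine of the angle between the wavevector
    % and the line of sight; not necessarily the exact expression used in the paper:
    P_{\mathrm{T,F}}(k,\mu) \;=\; b_{\mathrm{T}}\, b_{\mathrm{F}}\,
      \left(1 + \beta_{\mathrm{T}}\,\mu^{2}\right)\left(1 + \beta_{\mathrm{F}}\,\mu^{2}\right) P_{\delta}(k)
    ```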

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. Metaproteomics reveals major microbial players and their biodegradation functions in a large-scale aerobic composting plant

    PubMed Central

    Liu, Dongming; Li, Mingxiao; Xi, Beidou; Zhao, Yue; Wei, Zimin; Song, Caihong; Zhu, Chaowei

    2015-01-01

    Composting is an appropriate management alternative for municipal solid waste; however, our knowledge about the microbial regulation of this process is still scarce. We employed metaproteomics to elucidate the main biodegradation pathways in a municipal solid waste composting system across the main phases in a large-scale composting plant. The investigation of microbial succession revealed that Bacillales, Actinobacteria and Saccharomyces increased significantly in abundance during the composting process. The key microbial populations responsible for cellulose degradation differed between composting stages. Fungi were found to be the main producers of cellulase in the earlier phase. However, the cellulolytic fungal communities were gradually replaced by a purely bacterial one in the active phase, which did not support the concept that the thermophilic fungi are active through the thermophilic phase. The effective decomposition of cellulose in the curing phase required synergy between bacteria and fungi. PMID:25989417

  12. Functions of slags and gravels as substrates in large-scale demonstration constructed wetland systems for polluted river water treatment.

    PubMed

    Ge, Yuan; Wang, Xiaochang; Zheng, Yucong; Dzakpasu, Mawuli; Zhao, Yaqian; Xiong, Jiaqing

    2015-09-01

    The choice of substrates with high adsorption capacity, yet readily available and economical, is vital for sustainable pollutants removal in constructed wetlands (CWs). Two identical large-scale demonstration horizontal subsurface flow (HSSF) CWs (surface area, 340 m²; depth, 0.6 m; HLR, 0.2 m/day) with gravel or slag substrates were evaluated for their potential use in remediating polluted urban river water in the prevailing climate of northwest China. Batch experiments to elucidate phosphorus adsorption mechanisms indicated a higher adsorption capacity of slag (3.15 g/kg) than gravel (0.81 g/kg), whereby circa 20 % more total phosphorus (TP) removal was recorded in HSSF-slag than HSSF-gravel. TP removal occurred predominantly via CaO-slag dissolution followed by Ca phosphate precipitation. Moreover, average removals of chemical oxygen demand and biochemical oxygen demand were approximately 10 % higher in HSSF-slag than HSSF-gravel. Nevertheless, TP adsorption by slag seemed to get quickly saturated over the monitoring period, and the removal efficiency of the HSSF-slag approached that of the HSSF-gravel after 1-year continuous operation. In contrast, the two CWs achieved similar nitrogen removal during the 2-year monitoring period. Findings also indicated that gravel provided better support for the development of other wetland components such as biomass, whereby the biomass production and the amount of total nitrogen (TN; 43.1-59.0 g/m²) and TP (4.15-5.75 g/m²) assimilated by local Phragmites australis in HSSF-gravel were higher than that in HSSF-slag (41.2-52.0 g/m² and 3.96-4.07 g/m², respectively). Overall, comparable pollutant removal rates could be achieved in large-scale HSSF CWs with either gravel or slag as substrate and provide a possible solution for polluted urban river remediation in northern China.
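
    Batch adsorption data of the kind mentioned above are commonly summarized by fitting an isotherm to obtain a maximum adsorption capacity. The sketch below fits a Langmuir isotherm to made-up data with scipy; Langmuir is only one common modelling choice, and the paper's exact fitting approach is not assumed here.
    ```python
    # Hypothetical sketch: fitting a Langmuir isotherm to batch phosphate-adsorption
    # data to estimate a maximum adsorption capacity q_max. The data points are made
    # up for illustration and do not come from the study.
    import numpy as np
    from scipy.optimize import curve_fit

    c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium P conc. (mg/L)
    q = np.array([0.45, 0.75, 1.2, 1.9, 2.4, 2.75, 2.95])     # adsorbed P (g/kg), made up

    def langmuir(c, q_max, k):
        return q_max * k * c / (1.0 + k * c)

    (q_max, k), _ = curve_fit(langmuir, c_eq, q, p0=(3.0, 0.2))
    print(f"fitted capacity q_max = {q_max:.2f} g/kg, affinity K = {k:.3f} L/mg")
    ```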

  13. The large-scale cross-correlation of Damped Lyman alpha systems with the Lyman alpha forest: first measurements from BOSS

    SciTech Connect

    Font-Ribera, Andreu; Miralda-Escudé, Jordi; Arnau, Eduard; Carithers, Bill; Ross, Nicholas P.; White, Martin; Lee, Khee-Gan; Noterdaeme, Pasquier; Pâris, Isabelle; Petitjean, Patrick; Rollinde, Emmanuel; Rich, James; Schneider, Donald P.; York, Donald G. E-mail: miralda@icc.ub.edu

    2012-11-01

    We present the first measurement of the large-scale cross-correlation of Lyα forest absorption and Damped Lyman α systems (DLA), using the 9th Data Release of the Baryon Oscillation Spectroscopic Survey (BOSS). The cross-correlation is clearly detected on scales up to 40 h⁻¹ Mpc and is well fitted by the linear theory prediction of the standard Cold Dark Matter model of structure formation with the expected redshift distortions, confirming its origin in the gravitational evolution of structure. The amplitude of the DLA-Lyα cross-correlation depends on only one free parameter, the bias factor of the DLA systems, once the Lyα forest bias factors are known from independent Lyα forest correlation measurements. We measure the DLA bias factor to be b_D = (2.17±0.20) β_F^0.22, where the Lyα forest redshift distortion parameter β_F is expected to be above unity. This bias factor implies a typical host halo mass for DLAs that is much larger than expected in present DLA models, and is reproduced if the DLA cross section scales with halo mass as M_h^α, with α = 1.1±0.1 for β_F = 1. Matching the observed DLA bias factor and rate of incidence requires that atomic gas remains extended in massive halos over larger areas than predicted in present simulations of galaxy formation, with typical DLA proper sizes larger than 20 kpc in host halos of masses ∼ 10¹² M_☉. We infer that typical galaxies at z ≅ 2 to 3 are surrounded by systems of atomic clouds that are much more extended than the luminous parts of galaxies and contain ∼ 10% of the baryons in the host halo.

  14. Large-scale brain functional modularity is reflected in slow electroencephalographic rhythms across the human non-rapid eye movement sleep cycle.

    PubMed

    Tagliazucchi, Enzo; von Wegner, Frederic; Morzelewski, Astrid; Brodbeck, Verena; Borisov, Sergey; Jahnke, Kolja; Laufs, Helmut

    2013-04-15

    Large-scale brain functional networks (measured with functional magnetic resonance imaging, fMRI) are organized into separated but interacting modules, an architecture supporting the integration of distinct dynamical processes. In this work we study how the aforementioned modular architecture changes with the progressive loss of vigilance occurring in the descent to deep sleep and we examine the relationship between the ensuing slow electroencephalographic rhythms and large-scale network modularity as measured with fMRI. Graph theoretical methods are used to analyze functional connectivity graphs obtained from fifty-five participants at wakefulness, light and deep sleep. Network modularity (a measure of functional segregation) was found to increase during deeper sleep stages but not in light sleep. By endowing functional networks with dynamical properties, we found a direct link between increased electroencephalographic (EEG) delta power (1-4 Hz) and a breakdown of inter-modular connectivity. Both EEG slowing and increased network modularity were found to quickly decrease during awakenings from deep sleep to wakefulness, in a highly coordinated fashion. Studying the modular structure itself by means of a permutation test, we revealed different module memberships when deep sleep was compared to wakefulness. Analysis of node roles in the modular structure revealed an increase in the number of locally well-connected nodes and a decrease in the number of globally well-connected hubs, which hinders interactions between separated functional modules. Our results reveal a well-defined sequence of changes in brain modular organization occurring during the descent to sleep and establish a close parallel between modularity alterations in large-scale functional networks (accessible through whole brain fMRI recordings) and the slowing of scalp oscillations (visible on EEG). The observed re-arrangement of connectivity might play an important role in the processes underlying loss
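
    One standard way to quantify the "locally versus globally connected" node roles mentioned above is the participation coefficient. The sketch below computes it for a synthetic weighted adjacency matrix with assumed module assignments; it illustrates the metric only, not the exact node-role pipeline of the study.
    ```python
    # Hypothetical sketch: the participation coefficient, a standard graph metric
    # for the "node role" analysis described above. P_i is near 0 for nodes whose
    # links stay within their own module ("local" nodes) and approaches 1 for hubs
    # whose links are spread evenly across modules. Matrix and modules are synthetic.
    import numpy as np

    def participation_coefficient(adj, modules):
        """adj: (n, n) weighted adjacency; modules: length-n array of module labels."""
        k = adj.sum(axis=1)                              # node strength
        p = np.ones_like(k, dtype=float)
        for m in np.unique(modules):
            k_m = adj[:, modules == m].sum(axis=1)       # strength toward module m
            p -= np.divide(k_m, k, out=np.zeros_like(k, dtype=float), where=k > 0) ** 2
        return p

    rng = np.random.default_rng(0)
    n = 30
    modules = np.repeat([0, 1, 2], n // 3)
    adj = rng.random((n, n)) * 0.1
    adj[np.ix_(modules == 0, modules == 0)] += 0.5       # strengthen within-module links
    adj = (adj + adj.T) / 2
    np.fill_diagonal(adj, 0.0)

    p = participation_coefficient(adj, modules)
    print("mean participation, module 0 vs others: %.2f vs %.2f"
          % (p[modules == 0].mean(), p[modules != 0].mean()))
    ```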

  15. Correlation of generation interval and scale of large-scale submarine landslides using 3D seismic data off Shimokita Peninsula, Northeast Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, Yuki; Ashi, Juichiro; Morita, Sumito

    2016-04-01

    Clarifying the timing and scale of past submarine landslides is important for understanding their formation processes. The study area is part of the continental slope of the Japan Trench, where a number of large-scale submarine landslide (slump) deposits have been identified in Pliocene and Quaternary formations by analysing METI's 3D seismic data "Sanrikuoki 3D" off Shimokita Peninsula (Morita et al., 2011). Among the structural features, swarms of parallel dikes, which are likely dewatering paths formed during the slumping deformation, were identified, and slip directions are basically perpendicular to these parallel dikes. The parallel dikes are therefore a good indicator for estimating slip directions. The slip direction of each slide was determined on a one-kilometre grid across the 40 km x 20 km survey area. The predominant slip direction varies from the Pliocene to the Quaternary within the survey area. The parallel dike structure is also useful for distinguishing slump deposits from normal deposits on time-slice images. By tracing the outline of the slump deposits at each depth, we identified the general morphology of the overall slump deposits and calculated the volume of the extracted deposits to estimate the scale of each event. We also investigated the temporal and spatial variation of the depositional pattern of the slump deposits. Calculating the generation intervals of the slumps, some periodicity is apparent; in particular, large slumps do not occur in succession. Additionally, examining the relationship between cumulative volume and generation interval, a certain correlation is observed in the Pliocene and Quaternary. Key words: submarine landslides, 3D seismic data, Shimokita Peninsula

  16. Genome-wide association and large scale follow-up identifies 16 new loci influencing lung function

    PubMed Central

    Artigas, María Soler; Loth, Daan W; Wain, Louise V; Gharib, Sina A; Obeidat, Ma’en; Tang, Wenbo; Zhai, Guangju; Zhao, Jing Hua; Smith, Albert Vernon; Huffman, Jennifer E; Albrecht, Eva; Jackson, Catherine M; Evans, David M; Cadby, Gemma; Fornage, Myriam; Manichaikul, Ani; Lopez, Lorna M; Johnson, Toby; Aldrich, Melinda C; Aspelund, Thor; Barroso, Inês; Campbell, Harry; Cassano, Patricia A; Couper, David J; Eiriksdottir, Gudny; Franceschini, Nora; Garcia, Melissa; Gieger, Christian; Gislason, Gauti Kjartan; Grkovic, Ivica; Hammond, Christopher J; Hancock, Dana B; Harris, Tamara B; Ramasamy, Adaikalavan; Heckbert, Susan R; Heliövaara, Markku; Homuth, Georg; Hysi, Pirro G; James, Alan L; Jankovic, Stipan; Joubert, Bonnie R; Karrasch, Stefan; Klopp, Norman; Koch, Beate; Kritchevsky, Stephen B; Launer, Lenore J; Liu, Yongmei; Loehr, Laura R; Lohman, Kurt; Loos, Ruth JF; Lumley, Thomas; Al Balushi, Khalid A; Ang, Wei Q; Barr, R Graham; Beilby, John; Blakey, John D; Boban, Mladen; Boraska, Vesna; Brisman, Jonas; Britton, John R; Brusselle, Guy G; Cooper, Cyrus; Curjuric, Ivan; Dahgam, Santosh; Deary, Ian J; Ebrahim, Shah; Eijgelsheim, Mark; Francks, Clyde; Gaysina, Darya; Granell, Raquel; Gu, Xiangjun; Hankinson, John L; Hardy, Rebecca; Harris, Sarah E; Henderson, John; Henry, Amanda; Hingorani, Aroon D; Hofman, Albert; Holt, Patrick G; Hui, Jennie; Hunter, Michael L; Imboden, Medea; Jameson, Karen A; Kerr, Shona M; Kolcic, Ivana; Kronenberg, Florian; Liu, Jason Z; Marchini, Jonathan; McKeever, Tricia; Morris, Andrew D; Olin, Anna-Carin; Porteous, David J; Postma, Dirkje S; Rich, Stephen S; Ring, Susan M; Rivadeneira, Fernando; Rochat, Thierry; Sayer, Avan Aihie; Sayers, Ian; Sly, Peter D; Smith, George Davey; Sood, Akshay; Starr, John M; Uitterlinden, André G; Vonk, Judith M; Wannamethee, S Goya; Whincup, Peter H; Wijmenga, Cisca; Williams, O Dale; Wong, Andrew; Mangino, Massimo; Marciante, Kristin D; McArdle, Wendy L; Meibohm, Bernd; Morrison, Alanna C; North, Kari E; Omenaas, Ernst; Palmer, Lyle J; Pietiläinen, Kirsi H; Pin, Isabelle; Polašek, Ozren; Pouta, Anneli; Psaty, Bruce M; Hartikainen, Anna-Liisa; Rantanen, Taina; Ripatti, Samuli; Rotter, Jerome I; Rudan, Igor; Rudnicka, Alicja R; Schulz, Holger; Shin, So-Youn; Spector, Tim D; Surakka, Ida; Vitart, Veronique; Völzke, Henry; Wareham, Nicholas J; Warrington, Nicole M; Wichmann, H-Erich; Wild, Sarah H; Wilk, Jemma B; Wjst, Matthias; Wright, Alan F; Zgaga, Lina; Zemunik, Tatijana; Pennell, Craig E; Nyberg, Fredrik; Kuh, Diana; Holloway, John W; Boezen, H Marike; Lawlor, Debbie A; Morris, Richard W; Probst-Hensch, Nicole; Kaprio, Jaakko; Wilson, James F; Hayward, Caroline; Kähönen, Mika; Heinrich, Joachim; Musk, Arthur W; Jarvis, Deborah L; Gläser, Sven; Järvelin, Marjo-Riitta; Stricker, Bruno H Ch; Elliott, Paul; O’Connor, George T; Strachan, David P; London, Stephanie J; Hall, Ian P; Gudnason, Vilmundur; Tobin, Martin D

    2011-01-01

    Pulmonary function measures reflect respiratory health and predict mortality, and are used in the diagnosis of chronic obstructive pulmonary disease (COPD). We tested genome-wide association with the forced expiratory volume in 1 second (FEV1) and the ratio of FEV1 to forced vital capacity (FVC) in 48,201 individuals of European ancestry, with follow-up of top associations in up to an additional 46,411 individuals. We identified new regions showing association (combined P < 5×10⁻⁸) with pulmonary function, in or near MFAP2, TGFB2, HDAC4, RARB, MECOM (EVI1), SPATA9, ARMC2, NCR3, ZKSCAN3, CDC123, C10orf11, LRP1, CCDC38, MMP15, CFDP1, and KCNE2. Identification of these 16 new loci may provide insight into the molecular mechanisms regulating pulmonary function and into molecular targets for future therapy to alleviate reduced lung function. PMID:21946350

  17. Large-scale determination of sequence, structure, and function relationships in cytosolic glutathione transferases across the biosphere.

    PubMed

    Mashiyama, Susan T; Malabanan, M Merced; Akiva, Eyal; Bhosle, Rahul; Branch, Megan C; Hillerich, Brandan; Jagessar, Kevin; Kim, Jungwook; Patskovsky, Yury; Seidel, Ronald D; Stead, Mark; Toro, Rafael; Vetting, Matthew W; Almo, Steven C; Armstrong, Richard N; Babbitt, Patricia C

    2014-04-01

    The cytosolic glutathione transferase (cytGST) superfamily comprises more than 13,000 nonredundant sequences found throughout the biosphere. Their key roles in metabolism and defense against oxidative damage have led to thousands of studies over several decades. Despite this attention, little is known about the physiological reactions they catalyze and most of the substrates used to assay cytGSTs are synthetic compounds. A deeper understanding of relationships across the superfamily could provide new clues about their functions. To establish a foundation for expanded classification of cytGSTs, we generated similarity-based subgroupings for the entire superfamily. Using the resulting sequence similarity networks, we chose targets that broadly covered unknown functions and report here experimental results confirming GST-like activity for 82 of them, along with 37 new 3D structures determined for 27 targets. These new data, along with experimentally known GST reactions and structures reported in the literature, were painted onto the networks to generate a global view of their sequence-structure-function relationships. The results show how proteins of both known and unknown function relate to each other across the entire superfamily and reveal that the great majority of cytGSTs have not been experimentally characterized or annotated by canonical class. A mapping of taxonomic classes across the superfamily indicates that many taxa are represented in each subgroup and highlights challenges for classification of superfamily sequences into functionally relevant classes. Experimental determination of disulfide bond reductase activity in many diverse subgroups illustrates a theme common to many reaction types. Finally, sequence comparison of an enzyme that catalyzes a reductive dechlorination reaction relevant to bioremediation efforts with some of its closest homologs reveals differences among them likely to be associated with evolution of this unusual reaction

  18. Dynamic multi-swarm particle swarm optimizer using parallel PC cluster systems for global optimization of large-scale multimodal functions

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Chang, Ju-Ming

    2010-05-01

    This article presents a novel parallel multi-swarm optimization (PMSO) algorithm with the aim of enhancing the search ability of standard single-swarm PSOs for global optimization of very large-scale multimodal functions. Different from the existing multi-swarm structures, the multiple swarms work in parallel, and the search space is partitioned evenly and dynamically assigned in a weighted manner via the roulette wheel selection (RWS) mechanism. This parallel, distributed framework of the PMSO algorithm is developed based on a master-slave paradigm, which is implemented on a cluster of PCs using message passing interface (MPI) for information interchange among swarms. The PMSO algorithm handles multiple swarms simultaneously and each swarm performs PSO operations of its own independently. In particular, one swarm is designated for global search and the others are for local search. The first part of the experimental comparison is made among the PMSO, standard PSO, and two state-of-the-art algorithms (CTSS and CLPSO) in terms of various un-rotated and rotated benchmark functions taken from the literature. In the second part, the proposed multi-swarm algorithm is tested on large-scale multimodal benchmark functions up to 300 dimensions. The results of the PMSO algorithm show great promise in solving high-dimensional problems.
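
    The multi-swarm structure described above (several swarms evolving largely independently, with periodic sharing of the best solution) can be made concrete with a serial toy version on the multimodal Rastrigin benchmark. The MPI master-slave layer and the roulette-wheel space partitioning of the actual PMSO algorithm are not reproduced; all parameter values below are assumptions.
    ```python
    # Hypothetical sketch: a serial toy multi-swarm PSO on the multimodal Rastrigin
    # function. Only the multi-swarm structure (independent swarms plus periodic
    # sharing of the cross-swarm best) is illustrated; no MPI, no roulette-wheel
    # partitioning of the search space as in the PMSO algorithm itself.
    import numpy as np

    def rastrigin(x):
        return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

    rng = np.random.default_rng(0)
    dim, n_swarms, n_particles, iters = 10, 4, 20, 300
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration (assumed)

    pos = rng.uniform(-5.12, 5.12, (n_swarms, n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), rastrigin(pos)
    gbest, gbest_val = pos[0, 0].copy(), np.inf

    for it in range(iters):
        for s in range(n_swarms):                   # each swarm updates independently
            sbest = pbest[s, np.argmin(pbest_val[s])]
            r1, r2 = rng.random((2, n_particles, dim))
            vel[s] = w * vel[s] + c1 * r1 * (pbest[s] - pos[s]) + c2 * r2 * (sbest - pos[s])
            pos[s] = np.clip(pos[s] + vel[s], -5.12, 5.12)
            val = rastrigin(pos[s])
            better = val < pbest_val[s]
            pbest[s][better], pbest_val[s][better] = pos[s][better], val[better]
        if it % 50 == 49:                           # periodic cross-swarm exchange
            idx = np.unravel_index(np.argmin(pbest_val), pbest_val.shape)
            if pbest_val[idx] < gbest_val:
                gbest_val, gbest = pbest_val[idx], pbest[idx].copy()
            for s in range(n_swarms):               # seed each swarm's worst particle
                pos[s, np.argmax(pbest_val[s])] = gbest

    print(f"best Rastrigin value found: {min(gbest_val, pbest_val.min()):.3f}")
    ```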

  19. Large-scale real-space density-functional calculations: Moiré-induced electron localization in graphene

    SciTech Connect

    Oshiyama, Atsushi Iwata, Jun-Ichi; Uchida, Kazuyuki; Matsushita, Yu-Ichiro

    2015-03-21

    We show that our real-space finite-difference scheme allows us to perform density-functional calculations for nanometer-scale targets containing more than 100 000 atoms. This real-space scheme is applied to twisted bilayer graphene, clarifying that the Moiré pattern induced in slightly twisted bilayer graphene drastically modifies the atomic and electronic structures.
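
    For readers unfamiliar with the approach, the core of a real-space finite-difference scheme is that the kinetic operator in the Kohn-Sham equations is replaced by a high-order central-difference stencil on a uniform grid. The expression below is the generic textbook form of that discretization; the stencil order N and coefficients c_n are scheme-dependent and are not taken from this paper.

```latex
% Generic high-order central finite-difference approximation of the kinetic
% term on a uniform real-space grid of spacing h (coefficients c_n depend on
% the chosen order N; this is the standard form, not the paper's specific scheme).
-\tfrac{1}{2}\nabla^{2}\psi(x_i,y_j,z_k)\;\approx\;
-\frac{1}{2h^{2}}\sum_{n=-N}^{N} c_{n}
\left[\psi(x_i+nh,y_j,z_k)+\psi(x_i,y_j+nh,z_k)+\psi(x_i,y_j,z_k+nh)\right]
```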

  20. Large-Scale Variation in Combined Impacts of Canopy Loss and Disturbance on Community Structure and Ecosystem Functioning

    PubMed Central

    Crowe, Tasman P.; Cusson, Mathieu; Bulleri, Fabio; Davoult, Dominique; Arenas, Francisco; Aspden, Rebecca; Benedetti-Cecchi, Lisandro; Bevilacqua, Stanislao; Davidson, Irvine; Defew, Emma; Fraschetti, Simonetta; Golléty, Claire; Griffin, John N.; Herkül, Kristjan; Kotta, Jonne; Migné, Aline; Molis, Markus; Nicol, Sophie K.; Noël, Laure M-L J.; Pinto, Isabel Sousa; Valdivia, Nelson; Vaselli, Stefano; Jenkins, Stuart R.

    2013-01-01

    Ecosystems are under pressure from multiple human disturbances whose impact may vary depending on environmental context. We experimentally evaluated variation in the separate and combined effects of the loss of a key functional group (canopy algae) and physical disturbance on rocky shore ecosystems at nine locations across Europe. Multivariate community structure was initially affected (during the first three to six months) at six locations but after 18 months, effects were apparent at only three. Loss of canopy caused increases in cover of non-canopy algae in the three locations in southern Europe and decreases in some northern locations. Measures of ecosystem functioning (community respiration, gross primary productivity, net primary productivity) were affected by loss of canopy at five of the six locations for which data were available. Short-term effects on community respiration were widespread, but effects were rare after 18 months. Functional changes corresponded with changes in community structure and/or species richness at most locations and times sampled, but no single aspect of biodiversity was an effective predictor of longer-term functional changes. Most ecosystems studied were able to compensate in functional terms for impacts caused by indiscriminate physical disturbance. The only consistent effect of disturbance was to increase cover of non-canopy species. Loss of canopy algae temporarily reduced community resistance to disturbance at only two locations and at two locations actually increased resistance. Resistance to disturbance-induced changes in gross primary productivity was reduced by loss of canopy algae at four locations. Location-specific variation in the effects of the same stressors argues for flexible frameworks for the management of marine environments. These results also highlight the need to analyse how species loss and other stressors combine and interact in different environmental contexts. PMID:23799082

  1. Differential Item Functioning by Gender on a Large-Scale Science Performance Assessment: A Comparison across Grade Levels.

    ERIC Educational Resources Information Center

    Holweger, Nancy; Taylor, Grace

    The fifth-grade and eighth-grade science items on a state performance assessment were compared for differential item functioning (DIF) due to gender. The grade 5 sample consisted of 8,539 females and 8,029 males and the grade 8 sample consisted of 7,477 females and 7,891 males. A total of 30 fifth grade items and 26 eighth grade items were…

  2. Large-Scale, High-Resolution Multielectrode-Array Recording Depicts Functional Network Differences of Cortical and Hippocampal Cultures

    PubMed Central

    Ito, Shinya; Yeh, Fang-Chin; Hiolski, Emma; Rydygier, Przemyslaw; Gunning, Deborah E.; Hottowy, Pawel; Timme, Nicholas; Litke, Alan M.; Beggs, John M.

    2014-01-01

    Understanding the detailed circuitry of functioning neuronal networks is one of the major goals of neuroscience. Recent improvements in neuronal recording techniques have made it possible to record the spiking activity from hundreds of neurons simultaneously with sub-millisecond temporal resolution. Here we used a 512-channel multielectrode array system to record the activity from hundreds of neurons in organotypic cultures of cortico-hippocampal brain slices from mice. To probe the network structure, we employed a wavelet transform of the cross-correlogram to categorize the functional connectivity in different frequency ranges. With this method we directly compare, for the first time in any preparation, the neuronal network structures of cortex and hippocampus, on the scale of hundreds of neurons, with sub-millisecond time resolution. Among the three frequency ranges that we investigated, the lower two (the gamma (30–80 Hz) and beta (12–30 Hz) ranges) showed similar network structure between cortex and hippocampus, but there were many significant differences between these structures in the high frequency range (100–1000 Hz). The high frequency networks in cortex showed short-tailed degree distributions, shorter decay lengths of connectivity density, smaller clustering coefficients, and positive assortativity. Our results suggest that our method can characterize frequency-dependent differences of network architecture from different brain regions. Crucially, because these differences between brain regions require millisecond temporal scales to be observed and characterized, these results underscore the importance of high temporal resolution recordings for the understanding of functional networks in neuronal systems. PMID:25126851
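
    As a rough illustration of the cross-correlogram on which this kind of analysis is built (before any wavelet decomposition), the snippet below histograms spike-time differences between two trains within a ±50 ms window. It is a generic sketch on synthetic spike trains, not the authors' 512-channel pipeline; the window and bin width are arbitrary illustrative values.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, window=0.05, bin_width=0.001):
    """Histogram of spike-time differences (b - a) within +/- window seconds."""
    diffs = []
    for t in spikes_a:
        # only consider partner spikes inside the window around t
        near = spikes_b[(spikes_b >= t - window) & (spikes_b <= t + window)]
        diffs.append(near - t)
    diffs = np.concatenate(diffs) if diffs else np.array([])
    bins = np.arange(-window, window + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

# synthetic example: neuron B tends to fire ~3 ms after neuron A
rng = np.random.default_rng(1)
a = np.sort(rng.uniform(0, 100, 2000))                    # spike times of neuron A (s)
b = np.sort(np.concatenate([a + 0.003 + rng.normal(0, 0.0005, a.size),
                            rng.uniform(0, 100, 1000)]))  # correlated + background spikes
counts, edges = cross_correlogram(a, b)
print("left edge of peak bin (ms):", 1000 * edges[np.argmax(counts)])  # near +3 ms
```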

  3. Changes in large-scale chromatin structure and function during oogenesis: a journey in company with follicular cells.

    PubMed

    Luciano, Alberto M; Franciosi, Federica; Dieci, Cecilia; Lodde, Valentina

    2014-09-01

    The mammalian oocyte nucleus or germinal vesicle (GV) exhibits characteristic chromatin configurations, which are subject to dynamic modifications through oogenesis. The aim of this review is to highlight how changes in chromatin configurations are related to both functional and structural modifications occurring in the oocyte nuclear and cytoplasmic compartments. During the long phase of meiotic arrest at the diplotene stage, the chromatin enclosed within the GV is subjected to several levels of regulation. Morphologically, the chromosomes lose their individuality and form a loose chromatin mass. The decondensed configuration of chromatin then undergoes profound rearrangements during the final stages of oocyte growth that are tightly associated with the acquisition of meiotic and developmental competence. Functionally, the discrete stages of chromatin condensation are characterized by different levels of transcriptional activity, DNA methylation and covalent histone modifications. Interestingly, the program of chromatin rearrangement is not completely intrinsic to the oocyte, but follicular cells exert their regulatory actions through gap junction-mediated communication and intracellular messenger-dependent mechanism(s). With this in mind, and since oocyte growth mostly relies on the bidirectional interaction with the follicular cells, a connection between the cumulus cell gene expression profile and oocyte developmental competence, according to chromatin configuration, is proposed. This analysis can help in identifying candidate genes involved in the process of oocyte developmental competence acquisition and in providing non-invasive biomarkers of oocyte health status that can have important implications in treating human infertility as well as managing breeding schemes in domestic mammals.

  4. Large-Scale Prospective T Cell Function Assays in Shipped, Unfrozen Blood Samples: Experiences from the Multicenter TRIGR Trial

    PubMed Central

    Cheung, Roy K.; Becker, Dorothy J.; Girgis, Rose; Palmer, Jerry P.; Cuthbertson, David; Krischer, Jeffrey P.

    2014-01-01

    Broad consensus assigns T lymphocytes fundamental roles in inflammatory, infectious, and autoimmune diseases. However, clinical investigations have lacked fully characterized and validated procedures, equivalent to those of widely practiced biochemical tests with established clinical roles, for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities of the samples and the unstimulated cell counts in the viable samples. Also, subject age was significantly associated with the number of unstimulated cells and T cell proliferation to positive activators. Finally, we observed a pattern of statistically significant increases in T cell responses to tetanus toxin around the timing of infant vaccinations. This assay platform and shipping protocol satisfy the criteria for robust and reproducible long-term measurements of human T cell function, comparable to those of established blood biochemical tests. We present a stable technology for prospective disease-relevant T cell analysis in immunological diseases, vaccination medicine, and measurement of herd immunity. PMID:24334687

  5. Identification of Genes Important for Cutaneous Function Revealed by a Large Scale Reverse Genetic Screen in the Mouse

    PubMed Central

    DiTommaso, Tia; Jones, Lynelle K.; Cottle, Denny L.; Gerdin, Anna-Karin; Vancollie, Valerie E.; Watt, Fiona M.; Ramirez-Solis, Ramiro; Bradley, Allan; Steel, Karen P.; Sundberg, John P.; White, Jacqueline K.; Smyth, Ian M.

    2014-01-01

    The skin is a highly regenerative organ which plays critical roles in protecting the body and sensing its environment. Consequently, morbidity and mortality associated with skin defects represent a significant health issue. To identify genes important in skin development and homeostasis, we have applied a high throughput, multi-parameter phenotype screen to the conditional targeted mutant mice generated by the Wellcome Trust Sanger Institute's Mouse Genetics Project (Sanger-MGP). A total of 562 different mouse lines were subjected to a variety of tests assessing cutaneous expression, macroscopic clinical disease, histological change, hair follicle cycling, and aberrant marker expression. Cutaneous lesions were associated with mutations in 23 different genes. Many of these were not previously associated with skin disease in the organ (Mysm1, Vangl1, Trpc4ap, Nom1, Sparc, Farp2, and Prkab1), while others were ascribed new cutaneous functions on the basis of the screening approach (Krt76, Lrig1, Myo5a, Nsun2, and Nf1). The integration of these skin specific screening protocols into the Sanger-MGP primary phenotyping pipelines marks the largest reported reverse genetic screen undertaken in any organ and defines approaches to maximise the productivity of future projects of this nature, while flagging genes for further characterisation. PMID:25340873

  6. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  7. A systematic exploration of high-temperature stress-responsive genes in potato using large-scale yeast functional screening.

    PubMed

    Gangadhar, Baniekal Hiremath; Yu, Jae Woong; Sajeesh, Kappachery; Park, Se Won

    2014-04-01

    Potato (S. tuberosum) is a highly heat-sensitive crop; a slight rise from optimal temperature can lead to a drastic decline in tuber yield. Despite several advancements made in breeding for thermo-tolerant potato, the molecular mechanisms governing thermo-tolerance are poorly understood. The first step towards understanding the thermo-tolerance mechanism is to identify the key genes involved in it. Here we used a yeast-based functional screening method to identify, characterize and classify potato genes with the potential to impart heat tolerance. We constructed two cDNA expression libraries from heat-stressed potato plants (35 °C) after 2 and 48 h of treatment. 95 potential candidate genes were identified based on the enhanced ability of yeast cells over-expressing heterologous potato cDNA sequences to tolerate heat stress. Cross-resistance analysis of these heat-tolerant yeast clones to other abiotic stresses indicated that 20 genes were responsive to drought, 14 to salt and 11 to heat/drought/salt stresses. Comparison of the 95 genes with reported whole potato transcriptome data showed that the majority of them have varying expression patterns under heat, drought and salt stresses. The expression pattern was validated by analyzing the expression of 22 randomly selected genes under various stresses using qPCR. Gene ontology (GO) enrichment analysis of these 95 genes indicated that most of them are involved in cellular metabolism, signal transduction, response to stress and protein folding, suggesting a possible role of these genes in heat tolerance of potato. Genes identified from this study can be potential candidates for engineering heat tolerance as well as broad-spectrum abiotic stress tolerance of potato. PMID:24357347

  8. Large-scale Gene Knockdown in C. elegans Using dsRNA Feeding Libraries to Generate Robust Loss-of-function Phenotypes

    PubMed Central

    Maher, Kathryn N.; Catanese, Mary; Chase, Daniel L.

    2013-01-01

    RNA interference by feeding worms bacteria expressing dsRNAs has been a useful tool to assess gene function in C. elegans. While this strategy works well when a small number of genes are targeted for knockdown, large scale feeding screens show variable knockdown efficiencies, which limits their utility. We have deconstructed previously published RNAi knockdown protocols and found that the primary source of the reduced knockdown can be attributed to the loss of dsRNA-encoding plasmids from the bacteria fed to the animals. Based on these observations, we have developed a dsRNA feeding protocol that greatly reduces or eliminates plasmid loss to achieve efficient, high throughput knockdown. We demonstrate that this protocol will produce robust, reproducible knock down of C. elegans genes in multiple tissue types, including neurons, and will permit efficient knockdown in large scale screens. This protocol uses a commercially available dsRNA feeding library and describes all steps needed to duplicate the library and perform dsRNA screens. The protocol does not require the use of any sophisticated equipment, and can therefore be performed by any C. elegans lab. PMID:24121477

  9. Efficient large-scale generation of functional hepatocytes from mouse embryonic stem cells grown in a rotating bioreactor with exogenous growth factors and hormones

    PubMed Central

    2013-01-01

    Introduction Embryonic stem (ES) cells are considered a potentially advantageous source of hepatocytes for both transplantation and the development of bioartificial livers. However, the efficient large-scale generation of functional hepatocytes from ES cells remains a major challenge, especially for those methods compatible with clinical applications. Methods In this study, we investigated whether a large number of functional hepatocytes can be differentiated from mouse ES (mES) cells using a simulated microgravity bioreactor. mES cells were cultured in a rotating bioreactor in the presence of exogenous growth factors and hormones to form embryoid bodies (EBs), which then differentiated into hepatocytes. Results During the rotating culture, most of the EB-derived cells gradually showed the histologic characteristics of normal hepatocytes. More specifically, the expression of hepatic genes and proteins was detected at a higher level in the differentiated cells from the bioreactor culture than in cells from a static culture. On further growing, the EBs on tissue-culture plates, most of the EB-derived cells were found to display the morphologic features of hepatocytes, as well as albumin synthesis. In addition, the EB-derived cells grown in the rotating bioreactor exhibited higher levels of liver-specific functions, such as glycogen storage, cytochrome P450 activity, low-density lipoprotein, and indocyanine green uptake, than did differentiated cells grown in static culture. When the EB-derived cells from day-14 EBs and the cells’ culture supernatant were injected into nude mice, the transplanted cells were engrafted into the recipient livers. Conclusions Large quantities of high-quality hepatocytes can be generated from mES cells in a rotating bioreactor via EB formation. This system may be useful in the large-scale generation of hepatocytes for both cell transplantation and the development of bioartificial livers. PMID:24294908

  10. A one-step approach to the large-scale synthesis of functionalized MoS2 nanosheets by ionic liquid assisted grinding

    NASA Astrophysics Data System (ADS)

    Zhang, Wentao; Wang, Yanru; Zhang, Daohong; Yu, Shaoxuan; Zhu, Wenxin; Wang, Jing; Zheng, Fangqing; Wang, Shuaixing; Wang, Jianlong

    2015-05-01

    A prerequisite for exploiting most proposed applications for MoS2 is the availability of water-dispersible functionalized MoS2 nanosheets in large quantities. Here we report one-step synthesis and surface functionalization of MoS2 nanosheets by a facile ionic liquid assisted grinding method in the presence of chitosan. The selected ionic liquid with suitable surface energy could efficiently overcome the van der Waals force between the MoS2 layers. Meanwhile, chitosan molecules bind to the plane of MoS2 sheets non-covalently, which prevents the reassembling of exfoliated MoS2 sheets and facilitates the exfoliation progress. The obtained chitosan functionalized MoS2 nanosheets possess favorable stability and biocompatibility, which renders them as promising and biocompatible near-infrared agents for photothermal ablation of cancer. This contribution provides a facile way for the green, one-step and large-scale synthesis of advanced functional MoS2 materials.

  11. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
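
    Schematically, the consistency relation discussed here is usually written in its Eulerian, unequal-time form as below (following the general literature rather than this paper's notation); the Lagrangian-space statement of the abstract is that the analogous suitably normalized squeezed correlation vanishes. The prime denotes dropping the momentum-conserving delta function.

```latex
% Squeezed limit of an (N+1)-point function as the soft momentum q -> 0
% (standard unequal-time form; D is the linear growth factor). At equal times
% the growth factors cancel and momentum conservation makes the sum vanish.
\lim_{q\to 0}\,
\frac{\langle \delta(\mathbf q,\eta)\,\delta(\mathbf k_1,\eta_1)\cdots\delta(\mathbf k_N,\eta_N)\rangle'}
     {P_\delta(q,\eta)}
\;\simeq\;
-\sum_{a=1}^{N}\frac{D(\eta_a)}{D(\eta)}\,
\frac{\mathbf k_a\cdot\mathbf q}{q^{2}}\,
\langle \delta(\mathbf k_1,\eta_1)\cdots\delta(\mathbf k_N,\eta_N)\rangle'
```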

  12. A one-step approach to the large-scale synthesis of functionalized MoS2 nanosheets by ionic liquid assisted grinding.

    PubMed

    Zhang, Wentao; Wang, Yanru; Zhang, Daohong; Yu, Shaoxuan; Zhu, Wenxin; Wang, Jing; Zheng, Fangqing; Wang, Shuaixing; Wang, Jianlong

    2015-06-14

    A prerequisite for exploiting most proposed applications for MoS2 is the availability of water-dispersible functionalized MoS2 nanosheets in large quantities. Here we report one-step synthesis and surface functionalization of MoS2 nanosheets by a facile ionic liquid assisted grinding method in the presence of chitosan. The selected ionic liquid with suitable surface energy could efficiently overcome the van der Waals force between the MoS2 layers. Meanwhile, chitosan molecules bind to the plane of MoS2 sheets non-covalently, which prevents the reassembling of exfoliated MoS2 sheets and facilitates the exfoliation progress. The obtained chitosan functionalized MoS2 nanosheets possess favorable stability and biocompatibility, which renders them as promising and biocompatible near-infrared agents for photothermal ablation of cancer. This contribution provides a facile way for the green, one-step and large-scale synthesis of advanced functional MoS2 materials. PMID:25990823

  13. Understanding the recurrent large-scale green tide in the Yellow Sea: temporal and spatial correlations between multiple geographical, aquacultural and biological factors.

    PubMed

    Liu, Feng; Pang, Shaojun; Chopin, Thierry; Gao, Suqin; Shan, Tifeng; Zhao, Xiaobo; Li, Jing

    2013-02-01

    The coast of Jiangsu Province in China - where Ulva prolifera has always been spotted first before developing into green tides - is uniquely characterized by a huge intertidal radial mudflat. Results showed that: (1) propagules of U. prolifera have been consistently present in seawater and sediments of this mudflat and varied with locations and seasons; (2) over 50,000 tons of fermented chicken manure have been applied annually from March to May in coastal animal aquaculture ponds, and thereafter the wastewater has been discharged into the radial mudflat, intensifying eutrophication; and (3) free-floating U. prolifera can become stranded on any floating infrastructure in coastal waters, including large-scale Porphyra farming rafts. For a truly integrated management of the coastal zone, reduction in nutrient inputs and control of the effluents of the coastal pond systems are needed to control eutrophication and prevent green tides in the future.

  14. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
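
    The external (exterior) penalty strategy mentioned above converts a constrained problem into a sequence of unconstrained ones by adding a term that penalizes constraint violation and increasing its weight between cycles. The sketch below is a generic textbook version of that idea, not the BIGDOT code; the objective, constraints and penalty schedule are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraints(x):
    # inequality constraints written as g(x) <= 0
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # stay inside the unit circle
                     -x[0]])                        # x0 >= 0

def penalized(x, r):
    # exterior quadratic penalty: only violated constraints contribute
    g = constraints(x)
    return objective(x) + r * np.sum(np.maximum(0.0, g) ** 2)

x = np.array([0.0, 0.0])
r = 1.0
for _ in range(8):                  # increase the penalty weight each cycle
    res = minimize(lambda x_: penalized(x_, r), x, method="BFGS")
    x = res.x
    r *= 10.0
print("approximate constrained optimum:", x)   # close to (2, 1)/sqrt(5)
```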

  15. The LOSS OF APOMEIOSIS (LOA) locus in Hieracium praealtum can function independently of the associated large-scale repetitive chromosomal structure.

    PubMed

    Kotani, Yoshiko; Henderson, Steven T; Suzuki, Go; Johnson, Susan D; Okada, Takashi; Siddons, Hayley; Mukai, Yasuhiko; Koltunow, Anna M G

    2014-02-01

    Apomixis or asexual seed formation in Hieracium praealtum (Asteraceae) is controlled by two independent dominant loci. One of these, the LOSS OF APOMEIOSIS (LOA) locus, controls apomixis initiation, mitotic embryo sac formation (apospory) and suppression of the sexual pathway. The LOA locus is found near the end of a hemizygous chromosome surrounded by extensive repeats extending along the chromosome arm. Similar apomixis-carrying chromosome structures have been found in some apomictic grasses, suggesting that the extensive repetitive sequences may be functionally relevant to apomixis. Fluorescence in situ hybridization (FISH) was used to examine chromosomes of apomeiosis deletion mutants and rare recombinants in the critical LOA region arising from a cross between sexual Hieracium pilosella and apomictic H. praealtum. The combined analyses of aposporous and nonaposporous recombinant progeny and chromosomal karyotypes were used to determine that the functional LOA locus can be genetically separated from the very extensive repeat regions found on the LOA-carrying chromosome. The large-scale repetitive sequences associated with the LOA locus in H. praealtum are not essential for apospory or suppression of sexual megasporogenesis (female meiosis). PMID:24400904

  16. Large-scale insertional mutagenesis of Chlamydomonas supports phylogenomic functional prediction of photosynthetic genes and analysis of classical acetate-requiring mutants.

    PubMed

    Dent, Rachel M; Sharifi, Marina N; Malnoë, Alizée; Haglund, Cat; Calderon, Robert H; Wakao, Setsuko; Niyogi, Krishna K

    2015-04-01

    Chlamydomonas reinhardtii is a unicellular green alga that is a key model organism in the study of photosynthesis and oxidative stress. Here we describe the large-scale generation of a population of insertional mutants that have been screened for phenotypes related to photosynthesis and the isolation of 459 flanking sequence tags from 439 mutants. Recent phylogenomic analysis has identified a core set of genes, named GreenCut2, that are conserved in green algae and plants. Many of these genes are likely to be central to the process of photosynthesis, and they are over-represented by sixfold among the screened insertional mutants, with insertion events isolated in or adjacent to 68 of 597 GreenCut2 genes. This enrichment thus provides experimental support for functional assignments based on previous bioinformatic analysis. To illustrate one of the uses of the population, a candidate gene approach based on genome position of the flanking sequence of the insertional mutant CAL027_01_20 was used to identify the molecular basis of the classical C. reinhardtii mutation ac17. These mutations were shown to affect the gene PDH2, which encodes a subunit of the plastid pyruvate dehydrogenase complex. The mutants and associated flanking sequence data described here are publicly available to the research community, and they represent one of the largest phenotyped collections of algal insertional mutants to date.

  18. Multiple soft limits of cosmological correlation functions

    SciTech Connect

    Joyce, Austin; Khoury, Justin; Simonović, Marko E-mail: jkhoury@sas.upenn.edu

    2015-01-01

    We derive novel identities satisfied by inflationary correlation functions in the limit where two external momenta are taken to be small. We derive these statements in two ways: using background-wave arguments and as Ward identities following from the fixed-time path integral. Interestingly, these identities allow us to constrain some of the O(q{sup 2}) components of the soft limit, in contrast to their single-soft analogues. We provide several nontrivial checks of our identities both in the context of resonant non-Gaussianities and in small sound speed models. Additionally, we extend the relation at lowest order in external momenta to arbitrarily many soft legs, and comment on the many-soft extension at higher orders in the soft momentum. Finally, we consider how higher soft limits lead to identities satisfied by correlation functions in large-scale structure.

  19. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  20. Investigating the correlation between white matter and microvasculature changes in aging using large scale optical coherence tomography and confocal fluorescence imaging combined with tissue sectioning

    NASA Astrophysics Data System (ADS)

    Castonguay, Alexandre; Avti, Pramod K.; Moeini, Mohammad; Pouliot, Philippe; Tabatabaei, Maryam S.; Bélanger, Samuel; Lesage, Frédéric

    2015-03-01

    Here, we present a serial OCT/confocal scanner for histological study of the mouse brain. Three-axis linear stages combined with a sectioning vibratome allow the entire biological tissue to be cut through and every section to be imaged at microscopic resolution. After acquisition, each OCT volume and confocal image is re-stitched with adjacent acquisitions to obtain a reconstructed, digital volume of the imaged tissue. This imaging platform was used to investigate correlations between white matter and microvasculature changes in aging mice. Three age groups were used in this study (4, 12, 24 months). At sacrifice, mice were transcardially perfused with a FITC-containing gel. The dual imaging capability of the system allowed different contrast information to be revealed: OCT imaging reveals changes in refractive indices, giving contrast between white and grey matter in the mouse brain, while transcardial perfusion of the FITC-containing gel reveals the microvasculature in the brain with confocal imaging.

  1. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
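
    The beam-forming principle behind such arrays can be illustrated in one dimension: the far-field pattern is the phased sum of the element emissions, so a linear phase gradient across the elements steers the main lobe. The snippet below computes this array factor for a uniform linear array; the element count, pitch, wavelength and steering angle are arbitrary illustrative values, not the parameters of the 64 × 64 device.

```python
import numpy as np

wavelength = 1.55e-6          # illustrative optical wavelength (m)
k = 2 * np.pi / wavelength
n_elements = 64
spacing = 2.0e-6              # illustrative element pitch (m)
steer_deg = 10.0              # desired beam direction

n = np.arange(n_elements)
phases = -k * spacing * n * np.sin(np.radians(steer_deg))   # linear phase gradient

theta = np.radians(np.linspace(-30, 30, 2001))
# array factor: coherent sum of all element contributions toward angle theta
af = np.abs(np.exp(1j * (k * spacing * np.outer(np.sin(theta), n) + phases)).sum(axis=1))
print("main lobe at (deg):", np.degrees(theta[np.argmax(af)]))   # close to steer_deg
```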

  2. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  3. Functional clustering and lineage markers: insights into cellular differentiation and gene function from large-scale microarray studies of purified primary cell populations.

    PubMed

    Hume, David A; Summers, Kim M; Raza, Sobia; Baillie, J Kenneth; Freeman, Thomas C

    2010-06-01

    Very large microarray datasets showing gene expression across multiple tissues and cell populations provide a window on the transcriptional networks that underpin the differences in functional activity between biological systems. Clusters of co-expressed genes provide lineage markers, candidate regulators of cell function and, by applying the principle of guilt by association, candidate functions for genes of currently unknown function. We have analysed a dataset comprising pure cell populations from hemopoietic and non-hemopoietic cell types (http://biogps.gnf.org). Using a novel network visualisation and clustering approach, we demonstrate that it is possible to identify very tight expression signatures associated specifically with embryonic stem cells, mesenchymal cells and hematopoietic lineages. Selected examples validate the prediction that gene function can be inferred by co-expression. One expression cluster was enriched in phagocytes, which, alongside endosome-lysosome constituents, contains genes that may make up a 'pathway' for phagocyte differentiation. Promoters of these genes are enriched for binding sites for the ETS/PU.1 and MITF families. Another cluster was associated with the production of a specific extracellular matrix, with high levels of gene expression shared by cells of mesenchymal origin (fibroblasts, adipocytes, osteoblasts and myoblasts). We discuss the limitations placed upon such data by the presence of alternative promoters with distinct tissue specificity within many protein-coding genes.

  4. Pd(0)-Catalyzed Direct C-H Functionalization of 2-H-4-Benzylidene Imidazolones: Friendly and Large-Scale Access to GFP and Kaede Protein Fluorophores.

    PubMed

    Muselli, Mickaël; Baudequin, Christine; Perrio, Cécile; Hoarau, Christophe; Bischoff, Laurent

    2016-04-11

    The first one-pot synthesis of N-substituted 2-H-4-benzylidene imidazolones and their subsequent palladium-catalyzed and copper-assisted direct C2-H arylation and alkenylation with aryl- and alkenylhalides are described. This innovative synthesis is step-economical, azide-free, high yielding, highly flexible in the introduction of a variety of electronically different groups, and can be operated on large-scale. Moreover, the method allows direct access to C2-arylated or alkenylated imidazolone-based green fluorescent protein (GFP) and Kaede protein fluorophores, including ortho-hydroxylated models. PMID:26960963

  5. Is Traumatic Brain Injury Associated with Reduced Inter-Hemispheric Functional Connectivity? A Study of Large-Scale Resting State Networks following Traumatic Brain Injury.

    PubMed

    Rigon, Arianna; Duff, Melissa C; McAuley, Edward; Kramer, Arthur F; Voss, Michelle W

    2016-06-01

    Traumatic brain injury (TBI) often has long-term debilitating sequelae in cognitive and behavioral domains. Understanding how TBI impacts functional integrity of brain networks that underlie these domains is key to guiding future approaches to TBI rehabilitation. In the current study, we investigated the differences in inter-hemispheric functional connectivity (FC) of resting state networks (RSNs) between chronic mild-to-severe TBI patients and normal comparisons (NC), focusing on two externally oriented networks (i.e., the fronto-parietal network [FPN] and the executive control network [ECN]), one internally oriented network (i.e., the default mode network [DMN]), and one somato-motor network (SMN). Seed voxel correlation analysis revealed that TBI patients displayed significantly less FC between lateralized seeds and both homologous and non-homologous regions in the opposite hemisphere for externally oriented networks but not for DMN or SMN; conversely, TBI patients showed increased FC within regions of the DMN, especially precuneus and parahippocampal gyrus. Region of interest correlation analyses confirmed the presence of significantly higher inter-hemispheric FC in NC for the FPN (p < 0.01), and ECN (p < 0.05), but not for the DMN (p > 0.05) or SMN (p > 0.05). Further analysis revealed that performance on a neuropsychological test measuring organizational skills and visuo-spatial abilities administered to the TBI group, the Rey-Osterrieth Complex Figure Test, positively correlated with FC between the right FPN and homologous regions. Our findings suggest that distinct RSNs display specific patterns of aberrant FC following TBI; this represents a step forward in the search for biomarkers useful for early diagnosis and treatment of TBI-related cognitive impairment.

  6. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
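
    The quoted fractal dimension follows directly from the slope of the two-point correlation function. The relations below are the standard clustering results (valid on scales where the correlation is much greater than one), stated here for clarity rather than taken from this abstract.

```latex
% Power-law correlation function and the associated correlation dimension
% (standard relations for clustering in 3 spatial dimensions, xi >> 1 regime).
\xi(r) \propto r^{-\gamma},
\qquad
N(<r) \propto \int_0^{r}\!\bigl[1+\xi(s)\bigr]\,s^{2}\,ds \;\propto\; r^{D},
\qquad
D = 3-\gamma
\;\;\Rightarrow\;\;
D \simeq 1.2 \;\leftrightarrow\; \gamma \simeq 1.8 .
```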

  7. Locally Biased Galaxy Formation and Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Narayanan, Vijay K.; Berlind, Andreas A.; Weinberg, David H.

    2000-01-01

    We examine the influence of the morphology-density relation and a wide range of simple models for biased galaxy formation on statistical measures of large-scale structure. We contrast the behavior of local biasing models, in which the efficiency of galaxy formation is determined by the density, geometry, or velocity dispersion of the local mass distribution, with that of nonlocal biasing models, in which galaxy formation is modulated coherently over scales larger than the galaxy correlation length. If morphological segregation of galaxies is governed by a local morphology-density relation, then the correlation function of E/S0 galaxies should be steeper and stronger than that of spiral galaxies on small scales, as observed, while on large scales the E/S0 and spiral galaxies should have correlation functions with the same shape but different amplitudes. Similarly, all of our local bias models produce scale-independent amplification of the correlation function and power spectrum in the linear and mildly nonlinear regimes; only a nonlocal biasing mechanism can alter the shape of the power spectrum on large scales. Moments of the biased galaxy distribution retain the hierarchical pattern of the mass moments, but biasing alters the values and scale dependence of the hierarchical amplitudes S3 and S4. Pair-weighted moments of the galaxy velocity distribution are sensitive to the details of the bias prescription even if galaxies have the same local velocity distribution as the underlying dark matter. The nonlinearity of the relation between galaxy density and mass density depends on the biasing prescription and the smoothing scale, and the scatter in this relation is a useful diagnostic of the physical parameters that determine the bias. While the assumption that galaxy formation is governed by local physics leads to some important simplifications on large scales, even local biasing is a multifaceted phenomenon whose impact cannot be described by a single parameter or

  8. Regional and large-scale patterns in Amazon forest structure and function are mediated by variations in soil physical and chemical properties

    NASA Astrophysics Data System (ADS)

    A, Quesada, C.; Lloyd, J.

    2009-04-01

    Forest structure and dynamics have been noted to vary across the Amazon Basin in an east-west gradient, in a pattern which coincides with variations in soil fertility and geology. This has resulted in the hypothesis that soil fertility may play an important role in explaining Basin-wide variations in forest biomass, growth and stem turnover rates. To test this hypothesis and assess the importance of edaphic properties in affecting forest structure and dynamics, soil and plant samples were collected in a total of 59 different forest plots across the Amazon Basin. Samples were analysed for exchangeable cations, C, N and pH, with various P fractions also determined. Physical properties were also examined and an index of soil physical quality developed. Overall, forest structure and dynamics were found to be strongly and quantitatively related to edaphic conditions. Tree turnover rates emerged as being mostly influenced by soil physical properties, whereas forest growth rates were mainly related to a measure of available soil phosphorus, although also dependent on rainfall amount and distribution. On the other hand, large-scale variations in forest biomass could not be explained by any of the edaphic properties measured, nor by variation in climate. A new hypothesis of self-maintaining forest dynamic feedback mechanisms initiated by edaphic conditions is proposed. It is further suggested that this is a major factor determining forest disturbance levels, species composition and forest productivity on a Basin-wide scale.

  9. Large-scale analysis reveals a functional single-nucleotide polymorphism in the 5′-flanking region of PRDM16 gene associated with lean body mass

    PubMed Central

    Urano, Tomohiko; Shiraki, Masataka; Sasaki, Noriko; Ouchi, Yasuyoshi; Inoue, Satoshi

    2014-01-01

    Genetic factors are important for the development of sarcopenia, a geriatric disorder characterized by low lean body mass. The aim of this study was to search for novel genes that regulate lean body mass in humans. We performed a large-scale search for 250K single-nucleotide polymorphisms (SNPs) associated with bone mineral density (BMD) using SNP arrays in 1081 Japanese postmenopausal women. We focused on an SNP (rs12409277) located in the 5′-flanking region of the PRDM16 (PRD1-BF-1-RIZ1 homologous domain containing protein 16) gene that showed a significant P value in our screening. We demonstrated that PRDM16 gene polymorphisms were significantly associated with total body BMD in 1081 postmenopausal Japanese women. The rs12409277 SNP affected the transcriptional activity of PRDM16. The subjects with one or two minor allele(s) had a higher lean body mass than the subjects with two major alleles. Genetic analyses uncovered the importance of the PRDM16 gene in the regulation of lean body mass. PMID:24863034

  10. Making Large-Scale Networks from fMRI Data

    PubMed Central

    Schmittmann, Verena D.; Jahfari, Sara; Borsboom, Denny; Savi, Alexander O.; Waldorp, Lourens J.

    2015-01-01

    Pairwise correlations are currently a popular way to estimate a large-scale network (> 1000 nodes) from functional magnetic resonance imaging data. However, this approach generally results in a poor representation of the true underlying network. The reason is that pairwise correlations cannot distinguish between direct and indirect connectivity. As a result, pairwise correlation networks can lead to fallacious conclusions; for example, one may conclude that a network is a small-world when it is not. In a simulation study and an application to resting-state fMRI data, we compare the performance of pairwise correlations in large-scale networks (2000 nodes) against three other methods that are designed to filter out indirect connections. Recovery methods are evaluated in four simulated network topologies (small world or not, scale-free or not) in scenarios where the number of observations is very small compared to the number of nodes. Simulations clearly show that pairwise correlation networks are fragmented into separate unconnected components with excessive connectedness within components. This often leads to erroneous estimates of network metrics, like small-world structures or low betweenness centrality, and produces too many low-degree nodes. We conclude that using partial correlations, informed by a sparseness penalty, results in more accurate networks and corresponding metrics than pairwise correlation networks. However, even with these methods, the presence of hubs in the generating network can be problematic if the number of observations is too small. Additionally, we show for resting-state fMRI that partial correlations are more robust than correlations to different parcellation sets and to different lengths of time-series. PMID:26325185
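
    A minimal contrast between the two estimators discussed above can be sketched as follows: a thresholded pairwise-correlation adjacency matrix versus a sparse partial-correlation (graphical lasso) adjacency matrix, here on synthetic chain-structured data rather than fMRI. The network size, chain strength and correlation threshold are arbitrary illustrative choices, and this is not the authors' simulation setup.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)

# synthetic chain network: each node is driven by its predecessor plus noise,
# so only (i, i+1) edges are "direct"; longer-range correlations are indirect
n_nodes, n_obs = 30, 200
X = np.zeros((n_obs, n_nodes))
X[:, 0] = rng.normal(size=n_obs)
for i in range(1, n_nodes):
    X[:, i] = 0.7 * X[:, i - 1] + rng.normal(size=n_obs)

# pairwise-correlation network (threshold is an arbitrary illustrative choice)
corr = np.corrcoef(X, rowvar=False)
pairwise_adj = (np.abs(corr) > 0.3) & ~np.eye(n_nodes, dtype=bool)

# sparse partial-correlation network via the graphical lasso
model = GraphicalLassoCV().fit(X)
partial_adj = (np.abs(model.precision_) > 1e-6) & ~np.eye(n_nodes, dtype=bool)

print("pairwise edges:", pairwise_adj.sum() // 2)   # inflated by indirect links
print("partial  edges:", partial_adj.sum() // 2)    # closer to the true chain (29 edges)
```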

  11. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large-scale from small-scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value ReSc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  12. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interest within HENP and the larger clustering community.

  13. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power and provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure and the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss: the potential applications of the technology; overview of the as built actuator design; discuss problems that were uncovered during the development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  14. Differential Item and Person Functioning in Large-Scale Writing Assessments within the Context of the SAT®. Research Report 2013-6

    ERIC Educational Resources Information Center

    Engelhard, George, Jr.; Wind, Stefanie A.; Kobrin, Jennifer L.; Chajewski, Michael

    2013-01-01

    The purpose of this study is to illustrate the use of explanatory models based on Rasch measurement theory to detect systematic relationships between student and item characteristics and achievement differences using differential item functioning (DIF), differential group functioning (DGF), and differential person functioning (DPF) techniques. The…

  15. Redshift distortions of galaxy correlation functions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Gaztanaga, Enrique

    1994-01-01

    To examine how peculiar velocities can affect the two-, three-, and four-point redshift correlation functions, we evaluate volume-average correlations for configurations that emphasize and minimize redshift distortions for four different volume-limited samples from each of the CfA, SSRS, and IRAS redshift catalogs. We present the results as the correlation length $r_0$ and power index $\gamma$ of the two-point correlations, $\bar\xi_2 = (r_0/r)^\gamma$, and as the hierarchical amplitudes of the three- and four-point functions, $S_3 = \bar\xi_3/\bar\xi_2^{\,2}$ and $S_4 = \bar\xi_4/\bar\xi_2^{\,3}$. We find a characteristic distortion for $\bar\xi_2$: the slope $\gamma$ is flatter and the correlation length is larger in redshift space than in real space; that is, redshift distortions 'move' correlations from small to large scales. At the largest scales (up to 12 Mpc), the extra power in the redshift distribution is compatible with $\Omega^{4/7}/b \approx 1$. We estimate $\Omega^{4/7}/b$ to be 0.53 +/- 0.15, 1.10 +/- 0.16, and 0.84 +/- 0.45 for the CfA, SSRS, and IRAS catalogs. Higher order correlations $\bar\xi_3$ and $\bar\xi_4$ suffer similar redshift distortions, but in such a way that, within the accuracy of our analysis, the normalized amplitudes $S_3$ and $S_4$ are insensitive to this effect. The hierarchical amplitudes $S_3$ and $S_4$ are constant as a function of scale between 1 and 12 Mpc and have similar values in all samples and catalogs, $S_3 \approx 2$ and $S_4 \approx 6$, despite the fact that $\bar\xi_2$, $\bar\xi_3$, and $\bar\xi_4$ differ from one sample to another by large factors (up to a factor of 4 in $\bar\xi_2$, 8 for $\bar\xi_3$, and 12 for $\bar\xi_4$). The agreement between the independent estimations of $S_3$ and $S_4$ is remarkable given the different criteria in the selection of galaxies and also the difference in the
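
    For context, the "extra power" in redshift space is conventionally interpreted through the linear-theory boost of the angle-averaged correlation function shown below. This is the standard Kaiser-type monopole relation, written here with the growth-rate exponent 4/7 to match the abstract's notation; it is background material, not a result quoted from the paper itself.

```latex
% Linear-theory enhancement of the redshift-space two-point correlation
% function relative to real space (monopole / angle average),
% with beta = Omega^{4/7}/b.
\frac{\bar\xi_2^{\,(s)}(r)}{\bar\xi_2^{\,(r)}(r)}
\;\simeq\; 1+\tfrac{2}{3}\beta+\tfrac{1}{5}\beta^{2},
\qquad \beta \equiv \frac{\Omega^{4/7}}{b}.
```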

  16. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  17. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  18. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations quickly become unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
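
    For the pairwise-distance problems mentioned above, space-partitioning trees are the standard building block. The sketch below uses scipy's cKDTree to count pairs in separation bins and form a simple two-point correlation estimate on synthetic points; the clustered "catalog", the random catalog and the bin choices are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Illustrative data: a small clustered "catalog" and a uniform random catalog
    # in a unit box (real analyses use survey geometry and much larger N).
    data = np.concatenate([rng.normal(c, 0.03, size=(200, 3)) % 1.0
                           for c in rng.random((10, 3))])
    rand = rng.random((5 * len(data), 3))

    bins = np.linspace(0.01, 0.25, 13)
    dd_tree, rr_tree = cKDTree(data), cKDTree(rand)

    # count_neighbors returns cumulative pair counts within radius r;
    # differencing gives counts per separation bin.
    dd = np.diff(dd_tree.count_neighbors(dd_tree, bins))
    rr = np.diff(rr_tree.count_neighbors(rr_tree, bins))

    # Natural estimator: xi = (DD/RR) * (N_rand/N_data)^2 - 1
    xi = dd / rr * (len(rand) / len(data)) ** 2 - 1.0
    print(np.round(xi, 2))
    ```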

  19. Cultural norm fulfillment, interpersonal belonging, or getting ahead? A large-scale cross-cultural test of three perspectives on the function of self-esteem.

    PubMed

    Gebauer, Jochen E; Sedikides, Constantine; Wagner, Jenny; Bleidorn, Wiebke; Rentfrow, Peter J; Potter, Jeff; Gosling, Samuel D

    2015-09-01

    What is the function of self-esteem? We classified relevant theoretical work into 3 perspectives. The cultural norm-fulfillment perspective regards self-esteem as a result of adherence to cultural norms. The interpersonal-belonging perspective regards self-esteem as a sociometer of interpersonal belonging. The getting-ahead perspective regards self-esteem as a sociometer of getting ahead in the social world, while regarding low anxiety/neuroticism as a sociometer of getting along with others. The 3 perspectives make contrasting predictions on the relation between the Big Five personality traits and self-esteem across cultures. We tested these predictions in a self-report study (2,718,838 participants from 106 countries) and an informant-report study (837,655 informants from 64 countries). We obtained some evidence for cultural norm fulfillment, but the effect size was small. Hence, this perspective does not satisfactorily account for self-esteem's function. We found a strong relation between Extraversion and higher self-esteem, but no such relation between Agreeableness and self-esteem. These 2 traits are pillars of interpersonal belonging. Hence, the results do not fit the interpersonal-belonging perspective either. However, the results closely fit the getting-ahead perspective. The relation between Extraversion and higher self-esteem is consistent with this perspective, because Extraversion is the Big Five driver for getting ahead in the social world. The relation between Agreeableness and lower neuroticism is also consistent with this perspective, because Agreeableness is the Big Five driver for getting along with others.

  20. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention in large area and low cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  1. A multi-ingredient dietary supplement abolishes large-scale brain cell loss, improves sensory function, and prevents neuronal atrophy in aging mice.

    PubMed

    Lemon, J A; Aksenov, V; Samigullina, R; Aksenov, S; Rodgers, W H; Rollo, C D; Boreham, D R

    2016-06-01

    Transgenic growth hormone mice (TGM) are a recognized model of accelerated aging with characteristics including chronic oxidative stress, reduced longevity, mitochondrial dysfunction, insulin resistance, muscle wasting, and elevated inflammatory processes. Growth hormone/IGF-1 activate the Target of Rapamycin known to promote aging. TGM particularly express severe cognitive decline. We previously reported that a multi-ingredient dietary supplement (MDS) designed to offset five mechanisms associated with aging extended longevity, ameliorated cognitive deterioration and significantly reduced age-related physical deterioration in both normal mice and TGM. Here we report that TGM lose more than 50% of cells in midbrain regions, including the cerebellum and olfactory bulb. This is comparable to severe Alzheimer's disease and likely explains their striking age-related cognitive impairment. We also demonstrate that the MDS completely abrogates this severe brain cell loss, reverses cognitive decline and augments sensory and motor function in aged mice. Additionally, histological examination of retinal structure revealed markers consistent with higher numbers of photoreceptor cells in aging and supplemented mice. We know of no other treatment with such efficacy, highlighting the potential for prevention or amelioration of human neuropathologies that are similarly associated with oxidative stress, inflammation and cellular dysfunction. Environ. Mol. Mutagen. 57:382-404, 2016. © 2016 Wiley Periodicals, Inc. PMID:27199101

  2. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05

  3. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  4. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.

  5. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
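
    The core of any SQP method is the quadratic programming subproblem solved at each iterate. The sketch below shows a minimal dense SQP step for a toy equality-constrained problem by solving the KKT system directly; it is only an illustration of the basic iteration, not the sparse, reduced-Hessian quasi-Newton algorithm developed in the report.

    ```python
    import numpy as np

    # Minimal dense SQP iteration for: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
    # Each step solves the KKT system of the local QP subproblem.

    def f_grad(x):  return 2.0 * x
    def f_hess(x):  return 2.0 * np.eye(2)           # Lagrangian Hessian (constraint is linear)
    def c(x):       return np.array([x[0] + x[1] - 1.0])
    def c_jac(x):   return np.array([[1.0, 1.0]])

    x = np.array([2.0, -3.0])
    for _ in range(10):
        g, H, A, r = f_grad(x), f_hess(x), c_jac(x), c(x)
        m = A.shape[0]
        # QP subproblem KKT system: [H A^T; A 0] [p; lam] = [-g; -r]
        kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(kkt, np.concatenate([-g, -r]))
        p = sol[:2]
        x = x + p
        if np.linalg.norm(p) < 1e-10:
            break

    print(x)   # converges to [0.5, 0.5]
    ```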

  6. Contamination cannot explain the lack of large-scale power in the cosmic microwave background radiation

    SciTech Connect

    Bunn, Emory F.; Bourdon, Austin

    2008-12-15

    Several anomalies appear to be present in the large-angle cosmic microwave background (CMB) anisotropy maps of the Wilkinson Microwave Anisotropy Probe. One of these is a lack of large-scale power. Because the data otherwise match standard models extremely well, it is natural to consider perturbations of the standard model as possible explanations. We show that, as long as the source of the perturbation is statistically independent of the source of the primary CMB anisotropy, no such model can explain this large-scale power deficit. On the contrary, any such perturbation always reduces the probability of obtaining any given low value of large-scale power. We rigorously prove this result when the lack of large-scale power is quantified with a quadratic statistic, such as the quadrupole moment. When a statistic based on the integrated square of the correlation function is used instead, we present strong numerical evidence in support of the result. The result applies to models in which the geometry of spacetime is perturbed (e.g., an ellipsoidal universe) as well as explanations involving local contaminants, undiagnosed foregrounds, or systematic errors. Because the large-scale power deficit is arguably the most significant of the observed anomalies, explanations that worsen this discrepancy should be regarded with great skepticism, even if they help in explaining other anomalies such as multipole alignments.

  7. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  8. Omega from the anisotropy of the redshift correlation function

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jansky redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of approximately 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5 (+0.5, -0.25).
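
    In linear theory the quadrupole and hexadecapole distortions depend only on beta, approximately omega^0.6/b, so a measured quadrupole-to-monopole ratio can be inverted for beta. The sketch below evaluates the standard linear-theory (Kaiser) multipole coefficients and inverts the ratio numerically; the value beta = 0.5 is just an example input, not a measured result.

    ```python
    import numpy as np

    # Linear-theory (Kaiser) redshift-distortion multipoles relative to the
    # real-space spectrum, as functions of beta ~ omega^0.6 / b.

    def kaiser_multipoles(beta):
        mono = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
        quad = 4.0 * beta / 3.0 + 4.0 * beta**2 / 7.0
        hexa = 8.0 * beta**2 / 35.0
        return mono, quad, hexa

    def beta_from_quad_to_mono(ratio):
        """Numerically invert the quadrupole/monopole ratio on a fine grid."""
        grid = np.linspace(0.0, 3.0, 30001)
        mono, quad, _ = kaiser_multipoles(grid)
        return grid[np.argmin(np.abs(quad / mono - ratio))]

    mono, quad, hexa = kaiser_multipoles(0.5)
    print(f"quadrupole/monopole at beta = 0.5: {quad / mono:.3f}")
    print(f"beta recovered from that ratio:    {beta_from_quad_to_mono(quad / mono):.2f}")
    ```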

  9. Correlation, functional analysis and optical pattern recognition

    SciTech Connect

    Dickey, F.M.; Lee, M.L.; Stalker, K.T.

    1994-03-01

    Correlation integrals have played a central role in optical pattern recognition. The success of correlation, however, has been limited. What is needed is a mathematical operation more complex than correlation. Suitably complex operations are the functionals defined on the Hilbert space of Lebesgue square integrable functions. Correlation is a linear functional of a parameter. In this paper, we develop a representation of functionals in terms of inner products or equivalently correlation functions. We also discuss the role of functionals in neural networks. Having established a broad relation of correlation to pattern recognition, we discuss the computation of correlation functions using acousto-optics.
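
    The inner-product view of correlation has a direct digital analogue: correlating an input scene against a reference template at every shift, computed efficiently via FFTs. The sketch below does this with scipy's fftconvolve on a synthetic scene with an embedded pattern; it is a stand-in for the acousto-optic computation discussed in the paper, not a description of it.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(1)
    scene = rng.normal(size=(256, 256))                        # synthetic input scene
    template = np.zeros((16, 16)); template[4:12, 4:12] = 1.0  # reference pattern

    # Embed the pattern at a known location so the correlation peak is easy to check.
    scene[100:116, 60:76] += template

    # Cross-correlation = convolution with the flipped template.
    corr = fftconvolve(scene, template[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print("correlation peak at", peak)   # near the centre of the embedded pattern
    ```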

  10. Boundary anomalies and correlation functions

    NASA Astrophysics Data System (ADS)

    Huang, Kuo-Wei

    2016-08-01

    It was shown recently that boundary terms of conformal anomalies recover the universal contribution to the entanglement entropy and also play an important role in the boundary monotonicity theorem of odd-dimensional quantum field theories. Motivated by these results, we investigate relationships between boundary anomalies and the stress tensor correlation functions in conformal field theories. In particular, we focus on how the conformal Ward identity and the renormalization group equation are modified by boundary central charges. Renormalized stress tensors induced by boundary Weyl invariants are also discussed, with examples in spherical and cylindrical geometries.

  11. Modeling of Large-Scale Functional Brain Networks Based on Structural Connectivity from DTI: Comparison with EEG Derived Phase Coupling Networks and Evaluation of Alternative Methods along the Modeling Path

    PubMed Central

    Cheng, Bastian; Messé, Arnaud; Thomalla, Götz; Gerloff, Christian; König, Peter

    2016-01-01

    In this study, we investigate if phase-locking of fast oscillatory activity relies on the anatomical skeleton and if simple computational models informed by structural connectivity can help further to explain missing links in the structure-function relationship. We use diffusion tensor imaging data and alpha band-limited EEG signal recorded in a group of healthy individuals. Our results show that about 23.4% of the variance in empirical networks of resting-state functional connectivity is explained by the underlying white matter architecture. Simulating functional connectivity using a simple computational model based on the structural connectivity can increase the match to 45.4%. In a second step, we use our modeling framework to explore several technical alternatives along the modeling path. First, we find that an augmentation of homotopic connections in the structural connectivity matrix improves the link to functional connectivity while a correction for fiber distance slightly decreases the performance of the model. Second, a more complex computational model based on Kuramoto oscillators leads to a slight improvement of the model fit. Third, we show that the comparison of modeled and empirical functional connectivity at source level is much more specific for the underlying structural connectivity. However, different source reconstruction algorithms gave comparable results. Of note, as the fourth finding, the model fit was much better if zero-phase lag components were preserved in the empirical functional connectome, indicating a considerable amount of functionally relevant synchrony taking place with near zero or zero-phase lag. The combination of the best performing alternatives at each stage in the pipeline results in a model that explains 54.4% of the variance in the empirical EEG functional connectivity. Our study shows that large-scale brain circuits of fast neural network synchrony strongly rely upon the structural connectome and simple computational
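
    A minimal version of the Kuramoto-based modeling step mentioned above can be sketched as follows: phase oscillators coupled through a structural connectivity matrix are integrated forward, a simulated functional connectivity matrix is built from the resulting signals, and its match to an empirical matrix is quantified by the variance explained. All matrices below are synthetic stand-ins for the DTI- and EEG-derived data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, steps, K = 20, 1e-3, 20000, 5.0
    sc = rng.random((n, n)); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0.0)  # stand-in SC
    omega = 2 * np.pi * rng.normal(10.0, 1.0, n)      # ~alpha-band natural frequencies

    theta = rng.uniform(0, 2 * np.pi, n)
    signal = np.empty((steps, n))
    for t in range(steps):
        # Kuramoto coupling: for node i, sum_j sc[i, j] * sin(theta_j - theta_i)
        coupling = (sc * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * coupling)
        signal[t] = np.sin(theta)

    fc_sim = np.corrcoef(signal.T)
    fc_emp = np.corrcoef(rng.normal(size=(steps // 10, n)).T)   # placeholder "empirical" FC

    iu = np.triu_indices(n, k=1)
    r = np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]
    print(f"variance explained: {100 * r**2:.1f}%")
    ```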

  12. Modeling of Large-Scale Functional Brain Networks Based on Structural Connectivity from DTI: Comparison with EEG Derived Phase Coupling Networks and Evaluation of Alternative Methods along the Modeling Path.

    PubMed

    Finger, Holger; Bönstrup, Marlene; Cheng, Bastian; Messé, Arnaud; Hilgetag, Claus; Thomalla, Götz; Gerloff, Christian; König, Peter

    2016-08-01

    In this study, we investigate if phase-locking of fast oscillatory activity relies on the anatomical skeleton and if simple computational models informed by structural connectivity can help further to explain missing links in the structure-function relationship. We use diffusion tensor imaging data and alpha band-limited EEG signal recorded in a group of healthy individuals. Our results show that about 23.4% of the variance in empirical networks of resting-state functional connectivity is explained by the underlying white matter architecture. Simulating functional connectivity using a simple computational model based on the structural connectivity can increase the match to 45.4%. In a second step, we use our modeling framework to explore several technical alternatives along the modeling path. First, we find that an augmentation of homotopic connections in the structural connectivity matrix improves the link to functional connectivity while a correction for fiber distance slightly decreases the performance of the model. Second, a more complex computational model based on Kuramoto oscillators leads to a slight improvement of the model fit. Third, we show that the comparison of modeled and empirical functional connectivity at source level is much more specific for the underlying structural connectivity. However, different source reconstruction algorithms gave comparable results. Of note, as the fourth finding, the model fit was much better if zero-phase lag components were preserved in the empirical functional connectome, indicating a considerable amount of functionally relevant synchrony taking place with near zero or zero-phase lag. The combination of the best performing alternatives at each stage in the pipeline results in a model that explains 54.4% of the variance in the empirical EEG functional connectivity. Our study shows that large-scale brain circuits of fast neural network synchrony strongly rely upon the structural connectome and simple computational

  13. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology in recent years. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scale observed (≈ 1000 h⁻¹ Mpc) with the fractal dimension D ≈ 2. Most cosmologists who hold the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of next-generation galaxy redshift surveys.
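
    One standard way to quantify this question is the correlation dimension: the mean number of neighbours within radius r scales as N(<r) ∝ r^D, with D = 3 for a homogeneous distribution and D ≈ 2 in the fractal picture. The sketch below estimates D for a synthetic homogeneous sample; applying it to a volume-limited galaxy catalog would require survey-geometry corrections omitted here.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    points = rng.random((20000, 3))            # homogeneous Poisson sample in a unit box
    tree = cKDTree(points)

    radii = np.logspace(-1.6, -0.8, 8)
    # Average cumulative neighbour count per point (subtracting the self-pair).
    counts = np.array([tree.count_neighbors(tree, r) for r in radii]) / len(points) - 1

    # Slope of log N(<r) vs log r estimates the correlation dimension D.
    D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    print(f"estimated correlation dimension D = {D:.2f}   (expected ~3 here)")
    ```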

  14. Characterization of maximally random jammed sphere packings: Voronoi correlation functions

    NASA Astrophysics Data System (ADS)

    Klatt, Michael A.; Torquato, Salvatore

    2014-11-01

    We characterize the structure of maximally random jammed (MRJ) sphere packings by computing the Minkowski functionals (volume, surface area, and integrated mean curvature) of their associated Voronoi cells. The probability distribution functions of these functionals of Voronoi cells in MRJ sphere packings are qualitatively similar to those of an equilibrium hard-sphere liquid and partly even to the uncorrelated Poisson point process, implying that such local statistics are relatively structurally insensitive. This is not surprising because the Minkowski functionals of a single Voronoi cell incorporate only local information and are insensitive to global structural information. To improve upon this, we introduce descriptors that incorporate nonlocal information via the correlation functions of the Minkowski functionals of two cells at a given distance as well as certain cell-cell probability density functions. We evaluate these higher-order functions for our MRJ packings as well as equilibrium hard spheres and the Poisson point process. It is shown that these Minkowski correlation and density functions contain visibly more information than the corresponding standard pair-correlation functions. We find strong anticorrelations in the Voronoi volumes for the hyperuniform MRJ packings, consistent with previous findings for other pair correlations [A. Donev et al., Phys. Rev. Lett. 95, 090604 (2005), 10.1103/PhysRevLett.95.090604], indicating that large-scale volume fluctuations are suppressed by accompanying large Voronoi cells with small cells, and vice versa. In contrast to the aforementioned local Voronoi statistics, the correlation functions of the Voronoi cells qualitatively distinguish the structure of MRJ sphere packings (prototypical glasses) from that of not only the Poisson point process but also the correlated equilibrium hard-sphere liquids. Moreover, while we did not find any perfect icosahedra (the locally densest possible structure in which a central
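
    A simplified version of the Voronoi-based descriptors can be computed directly with standard tools: Voronoi cell volumes from scipy, followed by the correlation between volumes of nearby cells. The sketch below runs on a Poisson point pattern and ignores periodic boundaries, both simplifying assumptions relative to the MRJ analysis above.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull, cKDTree

    rng = np.random.default_rng(4)
    points = rng.random((2000, 3))
    vor = Voronoi(points)

    # Volume of each bounded Voronoi cell (cells touching the boundary are skipped).
    volumes = np.full(len(points), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region or len(region) == 0:
            continue
        volumes[i] = ConvexHull(vor.vertices[region]).volume

    ok = ~np.isnan(volumes)
    tree = cKDTree(points[ok])
    pairs = tree.query_pairs(r=0.12, output_type='ndarray')   # pairs of nearby cells

    v = volumes[ok]
    corr = np.corrcoef(v[pairs[:, 0]], v[pairs[:, 1]])[0, 1]
    print(f"volume-volume correlation of nearby cells: {corr:+.3f}")
    ```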

  15. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  16. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  17. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
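
    The large-scale step described above, a correlated Gaussian field turned into a binary rain-occupation mask, can be sketched compactly: filter white noise in Fourier space with an anisotropic kernel, then threshold at the quantile matching the desired occupation rate. Grid size, correlation lengths and occupation rate below are illustrative assumptions, not values fitted to ARAMIS data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 512                                   # grid points per side, assumed
    lx, ly = 40.0, 15.0                       # anisotropic correlation lengths [pixels], assumed
    occupation_rate = 0.25                    # target fraction of rainy area, assumed

    # Anisotropic Gaussian filter in Fourier space applied to white noise.
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    kernel = np.exp(-2 * np.pi**2 * ((kx * lx) ** 2 + (ky * ly) ** 2))

    white = np.fft.fft2(rng.normal(size=(n, n)))
    field = np.real(np.fft.ifft2(white * kernel))

    # Threshold so that exactly the target fraction of the domain is "raining".
    threshold = np.quantile(field, 1.0 - occupation_rate)
    rain_mask = field > threshold
    print(f"raining over {rain_mask.mean():.2%} of the domain")
    ```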

  18. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales-even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  19. Functional Multiple-Set Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  20. The IR-resummed Effective Field Theory of Large Scale Structures

    SciTech Connect

    Senatore, Leonardo; Zaldarriaga, Matias E-mail: matiasz@ias.edu

    2015-02-01

    We present a new method to resum the effect of large scale motions in the Effective Field Theory of Large Scale Structures. Because the linear power spectrum in ΛCDM is not scale free, the effects of the large scale flows are enhanced. Although previous EFT calculations of the equal-time density power spectrum at one and two loops showed a remarkable agreement with numerical results, they also showed a 2% residual which appeared related to the BAO oscillations. We show that this was indeed the case, explain the physical origin and show how a Lagrangian based calculation removes this difference. We propose a simple method to upgrade existing Eulerian calculations to effectively make them Lagrangian and compare the new results with existing fits to numerical simulations. Our new two-loop results agree with numerical results up to k ~ 0.6 h Mpc⁻¹ to within 1% with no oscillatory residuals. We also compute power spectra involving momentum, which is significantly more affected by the large scale flows. We show how keeping track of these velocities significantly enhances the UV reach of the momentum power spectrum in addition to removing the BAO related residuals. We compute predictions for the real space correlation function around the BAO scale and investigate its sensitivity to the EFT parameters and the details of the resummation technique.
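
    The real-space correlation function quoted at the end is related to the power spectrum by xi(r) = (1/2π²) ∫ dk k² P(k) j_0(kr). The sketch below evaluates this transform around the BAO scale for a toy power spectrum whose damped sinusoidal modulation stands in for the BAO wiggles; a real prediction would use the IR-resummed EFT or a Boltzmann-code spectrum.

    ```python
    import numpy as np
    from scipy.special import spherical_jn

    k = np.logspace(-4, 1, 4000)                          # wavenumber [h/Mpc]
    s_bao = 105.0                                         # assumed BAO scale [Mpc/h]
    p_smooth = 2e4 * (k / 0.02) / (1 + (k / 0.02) ** 2) ** 1.8   # toy smooth spectrum
    wiggles = 0.05 * np.sin(k * s_bao) * np.exp(-(k / 0.2) ** 2) # toy BAO modulation
    p_full = p_smooth * (1.0 + wiggles)

    def xi_of_r(p_k, r, damp=2.0):
        """xi(r) = 1/(2 pi^2) * int dk k^2 P(k) j0(k r); 'damp' tames the high-k tail."""
        w = np.exp(-(k / damp) ** 2)
        return np.array([np.trapz(k**2 * p_k * w * spherical_jn(0, k * ri), k)
                         for ri in r]) / (2 * np.pi**2)

    r = np.linspace(60.0, 140.0, 81)                      # separations [Mpc/h]
    bao_feature = xi_of_r(p_full, r) - xi_of_r(p_smooth, r)
    print("BAO feature peaks near r =", r[np.argmax(bao_feature)], "Mpc/h")
    ```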

  1. A novel computational approach towards the certification of large-scale boson sampling

    NASA Astrophysics Data System (ADS)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments suggest a possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to its deterministic photon sources and scalability. Therefore, a certification protocol for large-scale boson sampling experiments is needed to complete this picture. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier component can show the fingerprint of large-scale boson sampling. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.

  2. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe discusses topics on the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and n-body simulations of universe dominated by cold dark matter.

  3. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was its classification of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  4. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  5. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  6. Loops in inflationary correlation functions

    NASA Astrophysics Data System (ADS)

    Tanaka, Takahiro; Urakawa, Yuko

    2013-12-01

    We review the recent progress regarding the loop corrections to the correlation functions in the inflationary universe. A naive perturbation theory predicts that the loop corrections generated during inflation suffer from various infrared (IR) pathologies. Introducing an IR cutoff by hand is neither satisfactory nor enough to fix the problem of a secular growth, which may ruin the predictive power of inflation models if the inflation lasts sufficiently long. We discuss the origin of the IR divergences and explore the regularity conditions of the loop corrections for the adiabatic perturbation, the iso-curvature perturbation, and the tensor perturbation, in turn. These three kinds of perturbations have qualitative differences, but in discussing the IR regularity there is a feature common to all cases, which is the importance of the proper identification of observable quantities. Genuinely, observable quantities should respect the gauge invariance from the view point of a local observer. Interestingly, we find that the requirement of the IR regularity restricts the allowed quantum states.

  7. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.

  8. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  9. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
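
    The filtering step described above amounts to multiplying the Fourier modes of a large-scale density field by a scale-dependent bias b(k) to obtain a spatially varying reionization-redshift field. The sketch below applies an assumed parametric form with assumed parameter values to a synthetic Gaussian field; the fitted bias from the radiation-hydrodynamic simulations would replace both.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, box = 128, 100.0                              # grid size and box length [Mpc/h], assumed
    delta = rng.normal(size=(n, n, n))               # stand-in Gaussian density field

    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    b0, k0, alpha = 0.6, 0.2, 2.0                    # assumed bias parameters
    bias = b0 / (1.0 + kmag / k0) ** alpha           # assumed parametric form of b(k)

    # Filter the density field and map it to a reionization-redshift field.
    z_mean = 8.0                                     # assumed mean reionization redshift
    dz = np.real(np.fft.ifftn(bias * np.fft.fftn(delta)))
    z_reion = z_mean * (1.0 + dz)                    # denser regions reionize earlier
    print(z_reion.mean(), z_reion.std())
    ```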

  10. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  11. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
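
    ARPACK's implicitly restarted Arnoldi/Lanczos routines are exposed in SciPy through eigs and eigsh, which is a convenient way to try the package. The sketch below computes a few of the smallest eigenvalues of a large sparse symmetric matrix (a 1-D discrete Laplacian, used purely as an example) with shift-invert mode.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh   # SciPy's interface to ARPACK

    n = 100_000
    laplacian = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')

    # Six eigenvalues near the lower end of the spectrum; sigma=0 enables
    # shift-invert mode, which converges quickly for the smallest eigenvalues.
    vals, vecs = eigsh(laplacian, k=6, sigma=0.0, which='LM')
    print(np.sort(vals))
    ```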

  12. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  13. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  14. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  15. Errors on interrupter tasks presented during spatial and verbal working memory performance are linearly linked to large-scale functional network connectivity in high temporal resolution resting state fMRI.

    PubMed

    Magnuson, Matthew Evan; Thompson, Garth John; Schwarb, Hillary; Pan, Wen-Ju; McKinley, Andy; Schumacher, Eric H; Keilholz, Shella Dawn

    2015-12-01

    The brain is organized into networks composed of spatially separated anatomical regions exhibiting coherent functional activity over time. Two of these networks (the default mode network, DMN, and the task positive network, TPN) have been implicated in the performance of a number of cognitive tasks. To directly examine the stable relationship between network connectivity and behavioral performance, high temporal resolution functional magnetic resonance imaging (fMRI) data were collected during the resting state, and behavioral data were collected from 15 subjects on different days, exploring verbal working memory, spatial working memory, and fluid intelligence. Sustained attention performance was also evaluated in a task interleaved between resting state scans. Functional connectivity within and between the DMN and TPN was related to performance on these tasks. Decreased TPN resting state connectivity was found to significantly correlate with fewer errors on an interrupter task presented during a spatial working memory paradigm and decreased DMN/TPN anti-correlation was significantly correlated with fewer errors on an interrupter task presented during a verbal working memory paradigm. A trend for increased DMN resting state connectivity to correlate to measures of fluid intelligence was also observed. These results provide additional evidence of the relationship between resting state networks and behavioral performance, and show that such results can be observed with high temporal resolution fMRI. Because cognitive scores and functional connectivity were collected on nonconsecutive days, these results highlight the stability of functional connectivity/cognitive performance coupling. PMID:25563228
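
    The analysis pattern described above, within- and between-network connectivity correlated with task errors across subjects, can be sketched with a simple correlation-matrix pipeline. The time series, DMN/TPN labels and error counts below are synthetic placeholders; only the structure of the computation mirrors the study.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    n_subj, n_time, n_roi = 15, 600, 20
    labels = np.array(['DMN'] * 10 + ['TPN'] * 10)        # assumed ROI network assignment
    errors = rng.integers(0, 12, size=n_subj)             # placeholder interrupter-task errors

    tpn_conn, dmn_tpn_coupling = [], []
    for s in range(n_subj):
        ts = rng.normal(size=(n_time, n_roi))             # placeholder resting-state ROI series
        fc = np.corrcoef(ts.T)                            # functional connectivity matrix
        tpn = labels == 'TPN'
        dmn = labels == 'DMN'
        iu = np.triu_indices(tpn.sum(), k=1)
        tpn_conn.append(fc[np.ix_(tpn, tpn)][iu].mean())       # within-TPN connectivity
        dmn_tpn_coupling.append(fc[np.ix_(dmn, tpn)].mean())   # DMN-TPN coupling

    r, p = pearsonr(tpn_conn, errors)
    print(f"TPN connectivity vs. errors: r = {r:.2f}, p = {p:.3f}")
    ```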

  16. Bias in the effective field theory of large scale structures

    SciTech Connect

    Senatore, Leonardo

    2015-11-01

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. We describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/k_NL and k/k_M, where k is the wavenumber of interest, k_NL is the wavenumber associated to the non-linear scale, and k_M is the comoving wavenumber enclosing the mass of a galaxy.

  17. Bias in the effective field theory of large scale structures

    SciTech Connect

    Senatore, Leonardo

    2015-11-05

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/kNL and k/kM, where k is the wavenumber of interest, kNL is the wavenumber associated to the non-linear scale, and kM is the comoving wavenumber enclosing the mass of a galaxy.

  18. Bias in the effective field theory of large scale structures

    DOE PAGES

    Senatore, Leonardo

    2015-11-05

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/kNL and k/kM, where k is the wavenumber of interest, kNL is the wavenumber associated to the non-linear scale, and kM is the comoving wavenumber enclosing the mass of a galaxy.

  19. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  20. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  1. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈ 10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  2. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  3. Large-scale neuromorphic computing systems.

    PubMed

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers. PMID:27529195

  4. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  5. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  6. Testing gravity using large-scale redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.

    2013-11-01

    We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP). In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to rmax = 120 h-1 Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1e-4, and, for the nDGP model, a cross-over scale rc ≥ 340 Mpc, at 95 per cent confidence level.
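
    For orientation only, in the linear (Kaiser) limit the monopole and quadrupole moments of the redshift-space correlation function used in such analyses relate to the real-space correlation function \xi(r) through \beta = f/b (Hamilton 1992):

        \xi_0(s) = \left(1 + \tfrac{2\beta}{3} + \tfrac{\beta^2}{5}\right)\xi(r), \qquad
        \xi_2(s) = \left(\tfrac{4\beta}{3} + \tfrac{4\beta^2}{7}\right)\left[\xi(r) - \bar{\xi}(r)\right], \qquad
        \bar{\xi}(r) = \frac{3}{r^3}\int_0^r \xi(r')\,r'^2\,dr'.

    The analysis summarized above goes beyond this limit by including non-linear and wide-angle corrections, but these relations show how the growth rate enters the measured multipoles.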

  7. Large-scale climatic control on European precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David; Prudhomme, Christel; Hannah, David

    2010-05-01

    Precipitation variability has a significant impact on society. Sectors such as agriculture and water resources management are reliant on predictable and reliable precipitation supply with extreme variability having potentially adverse socio-economic impacts. Therefore, understanding the climate drivers of precipitation is of human relevance. This research examines the strength, location and seasonality of links between precipitation and large-scale Mean Sea Level Pressure (MSLP) fields across Europe. In particular, we aim to evaluate whether European precipitation is correlated with the same atmospheric circulation patterns or if there is a strong spatial and/or seasonal variation in the strength and location of centres of correlations. The work exploits time series of gridded ERA-40 MSLP on a 2.5˚×2.5˚ grid (0˚N-90˚N and 90˚W-90˚E) and gridded European precipitation from the Ensemble project on a 0.5°×0.5° grid (36.25˚N-74.25˚N and 10.25˚W-24.75˚E). Monthly Spearman rank correlation analysis was performed between MSLP and precipitation. During winter, a significant MSLP-precipitation correlation dipole pattern exists across Europe. Strong negative (positive) correlation located near the Icelandic Low and positive (negative) correlation near the Azores High pressure centres are found in northern (southern) Europe. These correlation dipoles resemble the structure of the North Atlantic Oscillation (NAO). The reversal in the correlation dipole patterns occurs at the latitude of central France, with regions to the north (British Isles, northern France, Scandinavia) having a positive relationship with the NAO, and regions to the south (Italy, Portugal, southern France, Spain) exhibiting a negative relationship with the NAO. In the lee of mountain ranges of eastern Britain and central Sweden, correlation with North Atlantic MSLP is reduced, reflecting a reduced influence of westerly flow on precipitation generation as the mountains act as a barrier to moist air.
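
    A minimal sketch of the gridpoint-by-gridpoint Spearman correlation described above (not the study's code; shapes, names and the random data are placeholders):

        import numpy as np
        from scipy.stats import spearmanr

        # hypothetical monthly fields: MSLP on a coarse grid, precipitation at one cell
        n_months, n_lat, n_lon = 480, 37, 73
        rng = np.random.default_rng(0)
        mslp = rng.standard_normal((n_months, n_lat, n_lon))
        precip = rng.standard_normal(n_months)

        rho = np.empty((n_lat, n_lon))
        pval = np.empty((n_lat, n_lon))
        for i in range(n_lat):
            for j in range(n_lon):
                rho[i, j], pval[i, j] = spearmanr(mslp[:, i, j], precip)

        # rho maps the sign and strength of the MSLP-precipitation link; masking cells
        # with large pval leaves the significant centres that form the dipole pattern.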

  8. RECONSTRUCTING THE SHAPE OF THE CORRELATION FUNCTION

    SciTech Connect

    Huffenberger, K. M.; Galeazzi, M.; Ursino, E.

    2013-06-01

    We develop an estimator for the correlation function which, in the ensemble average, returns the shape of the correlation function, even for signals that have significant correlations on the scale of the survey region. Our estimator is general and works in any number of dimensions. We develop versions of the estimator for both diffuse and discrete signals. As an application, we apply them to Monte Carlo simulations of X-ray background measurements. These include a realistic, spatially inhomogeneous population of spurious detector events. We discuss applying the estimator to the averaging of correlation functions evaluated on several small fields, and to other cosmological applications.
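
    The record describes a purpose-built, shape-preserving estimator; as context, a conventional pair-count (Landy-Szalay) estimator for a discrete point set, which such an estimator improves upon at survey-scale separations, can be sketched as follows (illustrative names and synthetic data):

        import numpy as np
        from scipy.spatial.distance import pdist, cdist

        rng = np.random.default_rng(0)
        data = rng.random((500, 3))           # stand-in source positions in a unit box
        rand = rng.random((2000, 3))          # random catalogue with the same geometry
        bins = np.linspace(0.01, 0.5, 25)

        def norm_hist(dists, n_pairs):
            return np.histogram(dists, bins=bins)[0] / n_pairs

        dd = norm_hist(pdist(data), len(data) * (len(data) - 1) / 2)
        rr = norm_hist(pdist(rand), len(rand) * (len(rand) - 1) / 2)
        dr = norm_hist(cdist(data, rand).ravel(), len(data) * len(rand))

        xi = (dd - 2 * dr + rr) / rr          # Landy-Szalay estimate per separation bin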

  9. The Effective Field Theory of Large Scale Structures at two loops

    SciTech Connect

    Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo E-mail: sfore@stanford.edu E-mail: senatore@stanford.edu

    2014-07-01

    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ∼ 0.6 h Mpc⁻¹, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ∼ 0.1 h Mpc⁻¹, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.
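
    Schematically, and with convention-dependent normalization of the counterterm coefficient, the one-loop EFT prediction that the two-loop computation extends has the form

        P_{\rm EFT}(k) \simeq P_{11}(k) + P_{\rm 1\text{-}loop}(k) - 2\,c_s^2\,\frac{k^2}{k_{\rm NL}^2}\,P_{11}(k) + \mathcal{O}\!\left((k/k_{\rm NL})^4\right),

    where P_{11} is the linear power spectrum and c_s^2 is the single fitted coefficient referred to above; at two loops the additional P_{\rm 2\text{-}loop} term enters together with the same counterterm, which contributes at tree level and inside one-loop diagrams.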

  10. Radial and angular correlations of two excited electrons. IV. Comparison of configuration-interaction wave functions with the group-theoretical basis functions

    NASA Astrophysics Data System (ADS)

    Lin, C. D.; Macek, J. H.

    1984-05-01

    Doubly-excited-state basis (DESB) functions of Herrick and Sinanoǧlu are compared with the large-scale configuration-interaction (CI) wave functions of Lipsky et al., and with the adiabatic channel functions in hyperspherical coordinates. It is shown that DESB functions represent well those states where the mean value of θ12 is large. Owing to the absence of intershell correlations, and a consequent underestimation of radial correlations, the DESB functions give excessive concentrations near θ12=0 for other states that are less sharply correlated in angle.

  11. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  12. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used to calculate the band structures of the proposed metamaterials.

  13. Large-scale dynamics and global warming

    SciTech Connect

    Held, I.M. )

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  14. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  15. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  16. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  17. A new technique for the detection of large scale landslides in glacio-lacustrine deposits using image correlation based upon aerial imagery: A case study from the French Alps

    NASA Astrophysics Data System (ADS)

    Fernandez, Paz; Whitworth, Malcolm

    2016-10-01

    Landslide monitoring has benefited from recent advances in the use of image correlation of high resolution optical imagery. However, this approach has typically involved satellite imagery that may not be available for all landslides depending on their time of movement and location. This study investigates the application of image correlation techniques to a sequence of aerial imagery of an active landslide in the French Alps. We apply an indirect landslide monitoring technique (COSI-Corr) based upon the cross-correlation between aerial photographs, to obtain horizontal displacement rates. Results for the 2001-2003 time interval are presented, providing a spatial model of landslide activity and motion across the landslide, which is consistent with previous studies. The study has identified areas of new landslide activity in addition to known areas and through image decorrelation has identified and mapped two new lateral landslides within the main landslide complex. This new approach for landslide monitoring is likely to be of wide applicability to other areas characterised by complex ground displacements.
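
    A minimal sketch of the underlying idea, estimating a rigid pixel offset between two co-registered image patches by FFT-based phase correlation (this is not COSI-Corr itself; names and data are placeholders):

        import numpy as np

        def integer_offset(patch_a, patch_b):
            """Offset (rows, cols) that best maps patch_b onto patch_a."""
            fa, fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
            cross = fa * np.conj(fb)
            cross /= np.abs(cross) + 1e-12          # keep phase information only
            corr = np.fft.ifft2(cross).real
            peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
            dims = np.array(corr.shape, dtype=float)
            peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed offsets
            return peak

        # Sliding this over the two image epochs, converting pixels to metres with the
        # ground sampling distance and dividing by the elapsed time gives a horizontal
        # displacement-rate map comparable to the 2001-2003 results described above.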

  18. Constructing perturbation theory kernels for large-scale structure in generalized cosmologies

    NASA Astrophysics Data System (ADS)

    Taruya, Atsushi

    2016-07-01

    We present a simple numerical scheme for perturbation theory (PT) calculations of large-scale structure. Solving the evolution equations for perturbations numerically, we construct the PT kernels as building blocks of statistical calculations, from which the power spectrum and/or correlation function can be systematically computed. The scheme is especially applicable to the generalized structure formation including modified gravity, in which the analytic construction of PT kernels is intractable. As an illustration, we show several examples for power spectrum calculations in f(R) gravity and ΛCDM models.
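
    A minimal sketch of the "solve the evolution equations numerically" idea at the level of the linear growth factor, with a toy modification-of-gravity function mu(a) (mu = 1 recovers GR); this is illustrative only and is not the kernel construction of the record:

        import numpy as np
        from scipy.integrate import solve_ivp

        Om0 = 0.31                                    # assumed matter density today

        def E2(a):                                    # H^2/H0^2 for flat LCDM
            return Om0 / a**3 + (1.0 - Om0)

        def Om(a):
            return Om0 / a**3 / E2(a)

        def mu(a):                                    # toy deviation from GR
            return 1.0 + 0.1 * (1.0 - Om(a))

        def rhs(lna, y):                              # y = [D, dD/dlna]
            a = np.exp(lna)
            D, Dp = y
            dlnH_dlna = -1.5 * Om(a)
            return [Dp, -(2.0 + dlnH_dlna) * Dp + 1.5 * Om(a) * mu(a) * D]

        grid = np.linspace(np.log(1e-3), 0.0, 300)
        sol = solve_ivp(rhs, (grid[0], grid[-1]), [1e-3, 1e-3], t_eval=grid, rtol=1e-8)
        D = sol.y[0] / sol.y[0][-1]                   # growth factor, D(a=1) = 1
        f_today = sol.y[1][-1] / sol.y[0][-1]         # growth rate f = dlnD/dlna at a = 1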

  19. FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE

    SciTech Connect

    Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing; McBride, Cameron K.

    2010-01-01

    Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
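
    A minimal sketch of the empirical-copula check described above: rank-transform two correlated, non-Gaussian samples to uniform marginals, Gaussianize them, and compare against a Gaussian copula (the data here are synthetic placeholders, not the dark matter density field):

        import numpy as np
        from scipy.stats import norm, rankdata

        rng = np.random.default_rng(1)
        x = rng.lognormal(size=20000)                  # stand-in for density at point 1
        y = 0.6 * x + rng.lognormal(size=20000)        # correlated density at point 2

        u = rankdata(x) / (len(x) + 1.0)               # empirical marginal CDF values
        v = rankdata(y) / (len(y) + 1.0)
        gx, gy = norm.ppf(u), norm.ppf(v)              # Gaussianized variables

        rho = np.corrcoef(gx, gy)[0, 1]
        # If the two-point copula is Gaussian, (gx, gy) is described by a bivariate
        # normal with correlation rho, independently of the skewed one-point marginals.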

  20. How Large Scale Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  1. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  2. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  3. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ‧ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  4. Large-Scale Fusion of Gray Matter and Resting-State Functional MRI Reveals Common and Distinct Biological Markers across the Psychosis Spectrum in the B-SNIP Cohort.

    PubMed

    Wang, Zheng; Meda, Shashwath A; Keshavan, Matcheri S; Tamminga, Carol A; Sweeney, John A; Clementz, Brett A; Schretlen, David J; Calhoun, Vince D; Lui, Su; Pearlson, Godfrey D

    2015-01-01

    To investigate whether aberrant interactions between brain structure and function present similarly or differently across probands with psychotic illnesses [schizophrenia (SZ), schizoaffective disorder (SAD), and bipolar I disorder with psychosis (BP)] and whether these deficits are shared with their first-degree non-psychotic relatives. A total of 1199 subjects were assessed, including 220 SZ, 147 SAD, 180 psychotic BP, 150 first-degree relatives of SZ, 126 SAD relatives, 134 BP relatives, and 242 healthy controls (1). All subjects underwent structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) scanning. Joint-independent component analysis (jICA) was used to fuse sMRI gray matter and rs-fMRI amplitude of low-frequency fluctuations data to identify the relationship between the two modalities. jICA revealed two significantly fused components. The association between functional brain alteration in a prefrontal-striatal-thalamic-cerebellar network and structural abnormalities in the default mode network was found to be common across psychotic diagnoses and correlated with cognitive function, social function, and schizo-bipolar scale scores. The fused alteration in the temporal lobe was unique to SZ and SAD. The above effects were not seen in any relative group (including those with cluster-A personality). Using a multivariate-fused approach involving two widely used imaging markers, we demonstrate both shared and distinct biological traits across the psychosis spectrum. Furthermore, our results suggest that the above traits are psychosis biomarkers rather than endophenotypes. PMID:26732139
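
    A minimal sketch of the joint-ICA fusion idea, using a generic ICA on side-by-side structural and functional feature matrices (dimensions, preprocessing and the ICA implementation are illustrative assumptions, not the study pipeline):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)
        n_subjects, n_gm, n_alff = 200, 500, 500
        gm = rng.standard_normal((n_subjects, n_gm))       # stand-in gray matter features
        alff = rng.standard_normal((n_subjects, n_alff))   # stand-in ALFF features

        joint = np.hstack([gm, alff])                      # subjects x (GM + ALFF) features
        ica = FastICA(n_components=2, random_state=0, max_iter=1000)
        loadings = ica.fit_transform(joint)                # per-subject weights of each fused component
        components = ica.components_                       # fused maps spanning both modalities

        # Group comparisons of the loadings across diagnostic groups would then identify
        # shared versus distinct structure-function components, as summarized above.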

  5. Time-sliced perturbation theory for large scale structure I: general formalism

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Garny, Mathias; Ivanov, Mikhail M.; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  6. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill.

  7. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  8. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  9. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  10. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  11. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  12. Large scale structures in transitional pipe flow

    NASA Astrophysics Data System (ADS)

    Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander

    2015-11-01

    We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.
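
    A minimal sketch of snapshot POD via the SVD, applied to a stack of fluctuating velocity snapshots (array sizes and the random data are placeholders; the dual-plane weighting of the experiment is not reproduced):

        import numpy as np

        rng = np.random.default_rng(2)
        n_snapshots, n_points = 200, 5000
        snapshots = rng.standard_normal((n_snapshots, n_points))   # stand-in PIV fields

        fluct = snapshots - snapshots.mean(axis=0)                  # remove the mean flow
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)        # economy SVD

        modes = Vt                          # spatial POD modes, ordered by energy
        energy_frac = s**2 / np.sum(s**2)   # fraction of fluctuating energy per mode
        time_coeffs = U * s                 # temporal coefficients of each mode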

  13. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  14. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  15. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  16. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  17. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg², the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg². The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  18. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to a large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that the planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the system component's status. This approach can be used to ensure secure operation of the system by its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
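
    A minimal sketch of the exponential-reliability arithmetic underlying such a fault-tree model, with purely illustrative failure rates and a single redundant pair (not the study's system model):

        import numpy as np

        def survival(failure_rate_per_hour, t_hours):
            """Exponential component reliability R(t) = exp(-lambda * t)."""
            return np.exp(-failure_rate_per_hour * t_hours)

        t = 8760.0                                   # one year of operation
        r_inverter = survival(4e-5, t)
        r_string_a = survival(1e-5, t)
        r_string_b = survival(1e-5, t)

        # AND gate on failures (redundant strings): fails only if both strings fail
        r_strings = 1.0 - (1.0 - r_string_a) * (1.0 - r_string_b)

        # OR gate on failures (series logic): the system fails if either block fails
        system_reliability = r_inverter * r_strings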

  19. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  20. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  1. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  2. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  3. Multivariate Clustering of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n²) to O(n × g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
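
    A minimal sketch of threshold-based cosine-similarity clustering of the non-spatial field variables, using a naive greedy pass (the paper's algorithm and its O(n × g(f(u))) geometric speed-up are not reproduced; data and threshold are placeholders):

        import numpy as np

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-30))

        def greedy_cosine_clusters(points, threshold=0.9):
            reps, labels = [], np.empty(len(points), dtype=int)
            for idx, p in enumerate(points):
                for cid, rep in enumerate(reps):
                    if cosine(p, rep) >= threshold:
                        labels[idx] = cid
                        break
                else:                       # no existing cluster is similar enough
                    reps.append(p)
                    labels[idx] = len(reps) - 1
            return labels

        rng = np.random.default_rng(3)
        field_vars = rng.standard_normal((1000, 16))   # field variables only, no (x, y, z, t)
        labels = greedy_cosine_clusters(field_vars)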

  4. Multivariate Clustering of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n²) to O(n × g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  5. Correlation functions along a massless flow

    SciTech Connect

    Delfino, G.; Mussardo, G.; Simonetti, P.

    1995-06-15

    A nonperturbative method based on the form factor bootstrap approach is proposed for the analysis of correlation functions of two-dimensional massless integrable theories and applied to the massless flow between the tricritical Ising and the critical Ising models.

  6. On the measurability of quantum correlation functions

    SciTech Connect

    Lima Bernardo, Bertúlio de Azevedo, Sérgio; Rosas, Alexandre

    2015-05-15

    The concept of correlation function is widely used in classical statistical mechanics to characterize how two or more variables depend on each other. In quantum mechanics, on the other hand, there are observables that cannot be measured at the same time; the so-called incompatible observables. This prospect imposes a limitation on the definition of a quantum analog for the correlation function in terms of a sequence of measurements. Here, based on the notion of sequential weak measurements, we circumvent this limitation by introducing a framework to measure general quantum correlation functions, in principle, independently of the state of the system and the operators involved. To illustrate, we propose an experimental configuration to obtain explicitly the quantum correlation function between two Pauli operators, in which the input state is an arbitrary mixed qubit state encoded on the polarization of photons.

  7. On the measurability of quantum correlation functions

    NASA Astrophysics Data System (ADS)

    de Lima Bernardo, Bertúlio; Azevedo, Sérgio; Rosas, Alexandre

    2015-05-01

    The concept of correlation function is widely used in classical statistical mechanics to characterize how two or more variables depend on each other. In quantum mechanics, on the other hand, there are observables that cannot be measured at the same time; the so-called incompatible observables. This prospect imposes a limitation on the definition of a quantum analog for the correlation function in terms of a sequence of measurements. Here, based on the notion of sequential weak measurements, we circumvent this limitation by introducing a framework to measure general quantum correlation functions, in principle, independently of the state of the system and the operators involved. To illustrate, we propose an experimental configuration to obtain explicitly the quantum correlation function between two Pauli operators, in which the input state is an arbitrary mixed qubit state encoded on the polarization of photons.

  8. Spatial Correlation Function of the Chandra Selected Active Galactic Nuclei

    NASA Technical Reports Server (NTRS)

    Yang, Y.; Mushotzky, R. F.; Barger, A. J.; Cowie, L. L.

    2006-01-01

    two groups. We have also found that the correlation between X-ray luminosity and clustering amplitude is weak, which, however, is fully consistent with the expectation using the simplest relations between X-ray luminosity, black hole mass, and dark halo mass. We study the evolution of the AGN clustering by dividing the samples into 4 redshift bins over 0.1 < z < 3.0. We find a very mild evolution in the clustering amplitude, which shows the same evolutionary trend found in optically selected quasars in the 2dF survey. We estimate the evolution of the bias, and find that the bias increases rapidly with redshift (b(z = 0.45) = 0.95 +/- 0.15 and b(z = 2.07) = 3.03 +/- 0.83). The typical mass of the dark matter halo derived from the bias estimates shows little change with redshift. The average halo mass is found to be log(M_halo/M_sun) ≈ 12.1. Subject headings: cosmology: observations - large-scale structure of the universe - x-rays: diffuse background - galaxies: nuclei

  9. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
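
    A minimal sketch of a Generalized Estimating Equations fit in the spirit described above is given below, using the statsmodels GEE implementation on synthetic data; the variable names, the Poisson family, the exchangeable working correlation, and the data themselves are illustrative assumptions, not the authors' analysis.

      # Sketch of a GEE fit: responses recorded repeatedly on each electrode are
      # treated as correlated within-electrode clusters.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_electrodes, n_trials = 20, 50
      electrode = np.repeat(np.arange(n_electrodes), n_trials)
      stimulus = rng.uniform(0, 1, size=n_electrodes * n_trials)
      # Hypothetical spike counts with an electrode-specific offset (within-cluster
      # correlation) plus a stimulus effect.
      offset = rng.normal(0, 0.3, size=n_electrodes)[electrode]
      counts = rng.poisson(np.exp(0.5 + 1.2 * stimulus + offset))

      df = pd.DataFrame({"counts": counts, "stimulus": stimulus, "electrode": electrode})
      exog = sm.add_constant(df[["stimulus"]])
      model = sm.GEE(df["counts"], exog, groups=df["electrode"],
                     family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit()
      print(result.summary())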

  10. Multitime correlation functions in nonclassical stochastic processes

    NASA Astrophysics Data System (ADS)

    Krumm, F.; Sperling, J.; Vogel, W.

    2016-06-01

    A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.

  11. Generalized hydrodynamic correlations and fractional memory functions

    NASA Astrophysics Data System (ADS)

    Rodríguez, Rosalio F.; Fujioka, Jorge

    2015-12-01

    A fractional generalized hydrodynamic (GH) model of the longitudinal velocity fluctuation correlation, and its associated memory function, for a complex fluid is analyzed. The adiabatic elimination of fast variables introduces memory effects in the transport equations, and the dynamics of the fluctuations are described by a generalized Langevin equation with long-range noise correlations. These features motivate the introduction of Caputo time-fractional derivatives and allow us to calculate analytic expressions for the fractional longitudinal velocity correlation function and its associated memory function. Our analysis eliminates a spurious constant term that appears in the memory function of the non-fractional description. It also produces a significantly slower power-law decay of the memory function in the GH regime, which reduces to the well-known exponential decay in the non-fractional Navier-Stokes limit.

  12. On the velocity in the Effective Field Theory of Large Scale Structures

    NASA Astrophysics Data System (ADS)

    Mercolli, Lorenzo; Pajer, Enrico

    2014-03-01

    We compute the renormalized two-point functions of density, divergence and vorticity of the velocity in the Effective Field Theory of Large Scale Structures. Because of momentum and mass conservation, the corrections from short scales to the large-scale power spectra of density, divergence and vorticity must start at order k⁴. For the vorticity this constitutes one of the two leading terms. Exact (approximated) self-similarity of an Einstein-de Sitter (ΛCDM) background fixes the time dependence so that the vorticity power spectrum at leading order is determined by the symmetries of the problem and the power spectrum around the non-linear scale. We show that to cancel all divergences in the velocity correlators one needs new counterterms. These fix the definition of velocity and do not represent new properties of the system. For an Einstein-de Sitter universe, we show that all three renormalized cross- and auto-correlation functions have the same structure but different numerical coefficients, which we compute. We elucidate the differences between using momentum and velocity.

  13. On the velocity in the Effective Field Theory of Large Scale Structures

    SciTech Connect

    Mercolli, Lorenzo; Pajer, Enrico E-mail: enrico.pajer@gmail.com

    2014-03-01

    We compute the renormalized two-point functions of density, divergence and vorticity of the velocity in the Effective Field Theory of Large Scale Structures. Because of momentum and mass conservation, the corrections from short scales to the large-scale power spectra of density, divergence and vorticity must start at order k⁴. For the vorticity this constitutes one of the two leading terms. Exact (approximated) self-similarity of an Einstein-de Sitter (ΛCDM) background fixes the time dependence so that the vorticity power spectrum at leading order is determined by the symmetries of the problem and the power spectrum around the non-linear scale. We show that to cancel all divergences in the velocity correlators one needs new counterterms. These fix the definition of velocity and do not represent new properties of the system. For an Einstein-de Sitter universe, we show that all three renormalized cross- and auto-correlation functions have the same structure but different numerical coefficients, which we compute. We elucidate the differences between using momentum and velocity.

  14. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150° C, coming quite close to typical operating conditions up to 125° C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a

  15. Density functional theory for pair correlation functions in polymeric liquids

    NASA Astrophysics Data System (ADS)

    Yethiraj, Arun; Fynewever, Herb; Shew, Chwen-Yang

    2001-03-01

    A density functional theory is presented for the pair correlation functions in polymeric liquids. The theory uses the Yethiraj-Woodward free-energy functional for the polymeric liquid, where the ideal gas free-energy functional is treated exactly and the excess free-energy functional is obtained using a weighted density approximation with the simplest choice of the weighting function. Pair correlation functions are obtained using the Percus trick, where the external field is taken to be a single polymer molecule. The minimization of the free energy in the theory requires a two molecule simulation at each iteration. The theory is very accurate for the pair correlation functions in freely jointed tangent-hard-sphere chains and freely rotating fused-hard-sphere chains, especially at low densities and for long chains. In addition, the theory allows the calculation of the virial pressure in these systems and shows a remarkable degree of consistency between the virial and compressibility pressure.

  16. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    Seasonal forecasts of precipitation and temperature, using either statistical or dynamic prediction, have been around for almost 2 decades. The skill of these forecasts differs both in space and time, with highest skill in areas heavily influenced by SST anomalies such as El Nino or areas where land surface properties have a major impact on, e.g., monsoon strength, such as the vegetation cover of the Sahel region or the snow cover of the Tibetan plateau. However, the skill of seasonal forecasts is limited in most regions, with anomaly correlation coefficients varying between 0.2 and 0.5 for 1-3 month precipitation totals. This raises the question whether seasonal hydrological forecasting is feasible. Here, we make the case that it is. Using the example of statistical forecasts of NAO-strength and related precipitation anomalies over Europe, we show that the skill of large-scale streamflow forecasts is generally much higher than the skill of the precipitation forecasts themselves, provided that the initial state of the system is accurately estimated. In the latter case, even the precipitation climatology can produce skillful results. This is due to the inertia of the hydrological system rooted in the storage of soil moisture, groundwater and snow pack, as corroborated by a recent study using snow observations for seasonal streamflow forecasting in the Western US. These examples seem to suggest that for accurate seasonal hydrological forecasting, correct state estimation is more important than accurate seasonal meteorological forecasts. However, large-scale estimation of hydrological states is difficult and validation of large-scale hydrological models often reveals large biases in, e.g., streamflow estimates. Fortunately, as shown with a validation study of the global model PCR-GLOBWB, these biases are of less importance when seasonal forecasts are evaluated in terms of their ability to reproduce anomalous flows and extreme events, i.e. by anomaly correlations or categorical quantile
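
    For concreteness, the anomaly correlation coefficient mentioned above is the correlation of forecast and observed anomalies about a common climatology; the sketch below uses hypothetical seasonal streamflow numbers purely for illustration.

      # Minimal sketch of the anomaly correlation coefficient (ACC).
      import numpy as np

      def anomaly_correlation(forecast, observed, climatology):
          f_anom = np.asarray(forecast) - np.asarray(climatology)
          o_anom = np.asarray(observed) - np.asarray(climatology)
          return float(np.sum(f_anom * o_anom) /
                       np.sqrt(np.sum(f_anom**2) * np.sum(o_anom**2)))

      # Hypothetical seasonal streamflow totals (arbitrary units).
      climatology = np.array([100.0, 100.0, 100.0, 100.0])
      observed    = np.array([120.0,  80.0, 105.0,  95.0])
      forecast    = np.array([115.0,  85.0, 100.0, 102.0])
      print("ACC =", anomaly_correlation(forecast, observed, climatology))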

  17. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations.

  18. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  19. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias b_c is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for b_c is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h⁻¹, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  20. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and dark matter velocity field. The origin and evolution of angular momenta is an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture about its origin, dependences on cosmological models and nonlinear evolutions. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  1. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
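
    As a generic illustration of the Fisher-matrix forecasting used above (not the paper's actual matrices), marginalized parameter errors follow from the diagonal of the inverse Fisher matrix; the 2x2 example below uses made-up numbers for two hypothetical parameters.

      # Minimal sketch: the marginalized 1-sigma error on each parameter is the
      # square root of the corresponding diagonal element of the inverse Fisher
      # matrix. The numbers are purely illustrative.
      import numpy as np

      fisher = np.array([[4.0e4, 1.2e4],
                         [1.2e4, 9.0e3]])
      covariance = np.linalg.inv(fisher)
      sigma = np.sqrt(np.diag(covariance))
      print("marginalized 1-sigma errors:", sigma)
      print("correlation coefficient:",
            covariance[0, 1] / (sigma[0] * sigma[1]))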

  2. Backscatter in Large-Scale Flows

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    2009-11-01

    Downgradient mixing of potential-vorticity and its variants are commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential-vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of forward cascade; the importance of capturing the effects of backscatter---the forcing of resolved scales by unresolved scales---is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.

  3. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and, in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ has been shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene has been shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  4. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals, locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  5. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  6. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  7. Pair correlation function integrals: Computation and use

    NASA Astrophysics Data System (ADS)

    Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens

    2011-08-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
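
    For context, the compressibility route referred to above relates the isothermal compressibility to a total correlation function integral, ρ k_B T κ_T = 1 + 4πρ ∫ [g(r) - 1] r² dr. The sketch below evaluates this quadrature for a schematic toy g(r) at a dilute-gas-like density; the density, temperature, and g(r) profile are illustrative placeholders, not the simulation data of the paper.

      # Minimal sketch: isothermal compressibility from a radial distribution
      # function via the compressibility (Kirkwood-Buff) relation.
      import numpy as np

      kB = 1.380649e-23            # J/K
      T = 300.0                    # K
      rho = 1.0e27                 # number density, m^-3 (illustrative)

      r = np.linspace(1.0e-12, 3.0e-9, 4000)   # separation, m
      d = 3.0e-10                               # toy "molecular diameter", m
      # Toy g(r): zero inside the core, damped oscillation outside, tending to 1.
      g = np.where(r < d, 0.0,
                   1.0 + 0.5 * np.exp(-(r - d) / d) * np.cos(2.0 * np.pi * (r - d) / d))

      integral = np.trapz((g - 1.0) * r**2, r)  # m^3
      kappa_T = (1.0 + 4.0 * np.pi * rho * integral) / (rho * kB * T)
      print("kappa_T ~", kappa_T, "Pa^-1")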

  8. Identification of Extremely Large Scale Structures in SDSS-III

    NASA Astrophysics Data System (ADS)

    Sankhyayan, Shishir; Bagchi, J.; Sarkar, P.; Sahni, V.; Jacob, J.

    2016-10-01

    We have initiated a search for, and detailed study of, large scale structures in the universe using galaxy redshift surveys. In this process, we take a volume-limited sample of galaxies from the Sloan Digital Sky Survey III and find very large structures even beyond a redshift of 0.2. One of the structures extends over more than 600 Mpc, which raises a question about the homogeneity scale of the universe. The shapes of adjacent voids and structures appear to be correlated, which supports the physical existence of the observed structures. Additional observational support includes the correlation of galaxy clusters and the QSO distribution with the density peaks of the volume-limited galaxy sample.

  9. Nonzero Density-Velocity Consistency Relations for Large Scale Structures.

    PubMed

    Rizzo, Luca Alberto; Mota, David F; Valageas, Patrick

    2016-08-19

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias. PMID:27588842

  10. Nonzero Density-Velocity Consistency Relations for Large Scale Structures

    NASA Astrophysics Data System (ADS)

    Rizzo, Luca Alberto; Mota, David F.; Valageas, Patrick

    2016-08-01

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias.

  11. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks

    PubMed Central

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators’ careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in highly degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system
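
    A toy version of the correlation-grouping step described above is sketched below: cells are grouped into connected components of a thresholded mutual-interference matrix, so that an optimizer could treat each group as a separate sub-problem. The interference matrix, threshold, and grouping rule are illustrative assumptions rather than the authors' algorithm.

      # Minimal sketch: group cells whose mutual interference exceeds a threshold.
      import numpy as np

      def group_cells(interference, threshold):
          """Return groups (lists of cell indices) as connected components of the
          graph whose edges link cells with interference above `threshold`."""
          n = interference.shape[0]
          unvisited, groups = set(range(n)), []
          while unvisited:
              stack = [unvisited.pop()]
              group = []
              while stack:
                  c = stack.pop()
                  group.append(c)
                  neighbours = [j for j in list(unvisited)
                                if interference[c, j] > threshold]
                  for j in neighbours:
                      unvisited.remove(j)
                      stack.append(j)
              groups.append(sorted(group))
          return groups

      rng = np.random.default_rng(1)
      m = rng.uniform(0, 1, size=(6, 6))
      m = (m + m.T) / 2                    # symmetric mutual interference
      np.fill_diagonal(m, 0.0)
      print(group_cells(m, threshold=0.7))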

  12. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in highly degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system

  13. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in highly degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system

  14. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  15. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands are becoming possible. The observational analysis of large-scale structure guided by large volume numerical simulations is beginning to offer us complementary information and crosschecks of cosmological parameters estimated from the anisotropies in Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution and even galaxy formation history is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological volume N-body simulations. Then I consider galaxy clusters and the large scale filaments surrounding them in a high resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  16. Triplet correlation functions in liquid water

    SciTech Connect

    Dhabal, Debdas; Chakravarty, Charusita; Singh, Murari; Wikfeldt, Kjartan Thor

    2014-11-07

    Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M. P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O–O–O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O–O–O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.
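
    For reference, the Kirkwood superposition approximation invoked above factorizes the triplet correlation function into a product of pair correlation functions; in standard notation,

      g^{(3)}(r_{12}, r_{13}, r_{23}) \approx g(r_{12})\, g(r_{13})\, g(r_{23}),

    so deviations of the measured O-O-O triplet correlations from this product quantify genuine three-body structure.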

  17. Triplet correlation functions in liquid water

    NASA Astrophysics Data System (ADS)

    Dhabal, Debdas; Singh, Murari; Wikfeldt, Kjartan Thor; Chakravarty, Charusita

    2014-11-01

    Triplet correlations have been shown to play a crucial role in the transformation of simple liquids to anomalous tetrahedral fluids [M. Singh, D. Dhabal, A. H. Nguyen, V. Molinero, and C. Chakravarty, Phys. Rev. Lett. 112, 147801 (2014)]. Here we examine triplet correlation functions for water, arguably the most important tetrahedral liquid, under ambient conditions, using configurational ensembles derived from molecular dynamics (MD) simulations and reverse Monte Carlo (RMC) datasets fitted to experimental scattering data. Four different RMC data sets with widely varying hydrogen-bond topologies fitted to neutron and x-ray scattering data are considered [K. T. Wikfeldt, M. Leetmaa, M. P. Ljungberg, A. Nilsson, and L. G. M. Pettersson, J. Phys. Chem. B 113, 6246 (2009)]. Molecular dynamics simulations are performed for two rigid-body effective pair potentials (SPC/E and TIP4P/2005) and the monatomic water (mW) model. Triplet correlation functions are compared with other structural measures for tetrahedrality, such as the O-O-O angular distribution function and the local tetrahedral order distributions. In contrast to the pair correlation functions, which are identical for all the RMC ensembles, the O-O-O triplet correlation function can discriminate between ensembles with different degrees of tetrahedral network formation with the maximally symmetric, tetrahedral SYM dataset displaying distinct signatures of tetrahedrality similar to those obtained from atomistic simulations of the SPC/E model. Triplet correlations from the RMC datasets conform closely to the Kirkwood superposition approximation, while those from MD simulations show deviations within the first two neighbour shells. The possibilities for experimental estimation of triplet correlations of water and other tetrahedral liquids are discussed.

  18. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
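
    As a self-contained illustration of the adjoint idea discussed above (a sketch under simplifying assumptions, not the report's implementation): for a steady linear system A(p)u = b and a scalar output J = c^T u, a single adjoint solve A^T λ = c yields dJ/dp_i = -λ^T (∂A/∂p_i) u for every parameter at once.

      # Minimal adjoint-sensitivity sketch for a steady linear system, with a
      # finite-difference check. The matrices and parameters are hypothetical.
      import numpy as np

      def adjoint_sensitivities(A, dA_dp, b, c):
          u = np.linalg.solve(A, b)          # forward solve
          lam = np.linalg.solve(A.T, c)      # adjoint solve
          return np.array([-lam @ (dA @ u) for dA in dA_dp])

      # Hypothetical 3x3 system with two parameters entering A linearly.
      A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
      dA_dp = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0])]
      b = np.array([1.0, 0.0, 1.0])
      c = np.array([0.0, 1.0, 0.0])
      print("dJ/dp   =", adjoint_sensitivities(A, dA_dp, b, c))

      # Finite-difference check for the first parameter.
      eps = 1e-6
      Jp = c @ np.linalg.solve(A + eps * dA_dp[0], b)
      Jm = c @ np.linalg.solve(A - eps * dA_dp[0], b)
      print("FD check:", (Jp - Jm) / (2 * eps))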

  19. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases with the amount of fuel burned, mainly because of the larger amount of heat generated and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  20. Complex modular structure of large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
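
    As a minimal illustration of module detection on a graph (using greedy modularity maximization from networkx rather than the random-walk method of the paper), consider the small, hypothetical two-module network below.

      # Minimal sketch: detect modules and compute the modularity score.
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities, modularity

      # Hypothetical small network with two planted modules plus one bridge edge.
      G = nx.Graph()
      G.add_edges_from([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # module A
                        (4, 5), (4, 6), (5, 6), (5, 7), (6, 7),   # module B
                        (3, 4)])                                   # bridge
      communities = greedy_modularity_communities(G)
      print("modules:", [sorted(c) for c in communities])
      print("modularity:", modularity(G, communities))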

  1. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the effectiveness of the proposed results.

  2. Locality of correlation in density functional theory.

    PubMed

    Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano

    2016-08-01

    The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → -A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed. PMID:27497544

  3. Locality of correlation in density functional theory

    NASA Astrophysics Data System (ADS)

    Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano

    2016-08-01

    The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → -A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.

  4. Locality of correlation in density functional theory.

    PubMed

    Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano

    2016-08-01

    The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → -A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.

  5. Strong CP Violation in Large Scale Magnetic Fields

    SciTech Connect

    Faccioli, P.; Millo, R.

    2007-11-19

    We explore the possibility of improving on the present experimental bounds on Strong CP violation by studying processes in which the smallness of θ is compensated by the presence of some other very large scale. In particular, we study the response of the θ vacuum to large-scale magnetic fields, whose correlation lengths can be as large as the size of galaxy clusters. We find that, if strong interactions break CP, an external magnetic field would induce an electric vacuum polarization along the same direction. As a consequence, u,d-bar and d,u-bar quarks would accumulate in opposite regions of space, giving rise to an electric dipole moment. We estimate the magnitude of this effect both at T = 0 and for 0

  6. Systematic renormalization of the effective theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contributions of short distance perturbations to the large-scale density contrast δ and momentum density π(k) scale as k² and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  7. Very large-scale motions in a turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow with Re_D = 35,000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on pipe radius R, is R+ = 934, and the computational domain length is 30R. The computed mean flow statistics agree well with previous DNS data at Re_D = 44,000 and 24,000. Inspection of the instantaneous fields and the two-point correlation of the streamwise velocity fluctuations showed that very long meandering motions exceeding 25R exist in the logarithmic and wake regions, and that the streamwise length scale increases almost linearly up to y/R ~ 0.3, while the structures in the turbulent boundary layer only reach up to the edge of the log layer. Time-resolved instantaneous fields revealed that hairpin packet-like structures grow with continuous stretching along the streamwise direction and create the very large-scale structures with meandering in the spanwise direction, consistent with the previous conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).

  8. Large-scale network-level processes during entrainment

    PubMed Central

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-01-01

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557
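
    As a rough illustration of the graph-theoretical step described above, the sketch below thresholds a connectivity matrix into a binary network and computes its global density with networkx. The matrix, the 0.3 threshold, and the number of ROIs are placeholders, not the MEG-derived source-level connectivity used in the study.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_roi = 90

# Placeholder ROI-by-ROI connectivity values (symmetrized, zero diagonal).
conn = np.abs(rng.normal(0.2, 0.1, size=(n_roi, n_roi)))
conn = (conn + conn.T) / 2.0
np.fill_diagonal(conn, 0.0)

# Keep only edges above an arbitrary threshold and compute global density,
# i.e. the fraction of possible edges that are present.
G = nx.from_numpy_array((conn > 0.3).astype(int))
print(nx.density(G))
```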

  9. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  10. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Correlation Functions Aid Analyses Of Spectra

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Norton, Robert H., Jr.

    1989-01-01

    New uses have been found for correlation functions in the analysis of spectra. In an approach combining elements of both pattern-recognition and traditional spectral-analysis techniques, spectral lines are identified in data that appear useless at first glance because they are dominated by noise. The new approach is particularly useful in measuring concentrations of rare molecular species in the atmosphere.

  12. Large-scale brain networks are distinctly affected in right and left mesial temporal lobe epilepsy.

    PubMed

    de Campos, Brunno Machado; Coan, Ana Carolina; Lin Yasuda, Clarissa; Casseb, Raphael Fernandes; Cendes, Fernando

    2016-09-01

    Mesial temporal lobe epilepsy (MTLE) with hippocampal sclerosis (HS) is associated with functional and structural alterations extending beyond the temporal regions and abnormal pattern of brain resting state networks (RSNs) connectivity. We hypothesized that the interaction of large-scale RSNs is differently affected in patients with right- and left-MTLE with HS compared to controls. We aimed to determine and characterize these alterations through the analysis of 12 RSNs, functionally parceled in 70 regions of interest (ROIs), from resting-state functional-MRIs of 99 subjects (52 controls, 26 right- and 21 left-MTLE patients with HS). Image preprocessing and statistical analysis were performed using the UF²C toolbox, which provided ROI-wise results for intranetwork and internetwork connectivity. Intranetwork abnormalities were observed in the dorsal default mode network (DMN) in both groups of patients and in the posterior salience network in right-MTLE. Both groups showed abnormal correlation between the dorsal-DMN and the posterior salience, as well as between the dorsal-DMN and the executive-control network. Patients with left-MTLE also showed reduced correlation between the dorsal-DMN and visuospatial network and increased correlation between bilateral thalamus and the posterior salience network. The ipsilateral hippocampus stood out as a central area of abnormalities. Alterations on left-MTLE expressed a low cluster coefficient, whereas the altered connections on right-MTLE showed low cluster coefficient in the DMN but high in the posterior salience regions. Both right- and left-MTLE patients with HS have widespread abnormal interactions of large-scale brain networks; however, all parameters evaluated indicate that left-MTLE has a more intricate bihemispheric dysfunction compared to right-MTLE. Hum Brain Mapp 37:3137-3152, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  13. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  14. Statistics of Caustics in Large-Scale Structure Formation

    NASA Astrophysics Data System (ADS)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
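
    For readers unfamiliar with the Zel'dovich formalism invoked here, the textbook form of the map and the condition under which caustics (infinite-density folds) appear are quoted below as background only; the paper's 1D and 2D caustic statistics build on these expressions.

```latex
% Standard Zel'dovich approximation (background for the caustic statistics above):
\begin{align}
  \mathbf{x}(\mathbf{q},t) &= \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q}), \\
  \rho(\mathbf{x},t) &= \frac{\bar{\rho}}{\prod_i \left[1 - D(t)\,\lambda_i(\mathbf{q})\right]},
\end{align}
% where the \lambda_i are the eigenvalues of the deformation tensor
% -\partial\Psi_i/\partial q_j; a caustic forms wherever 1 - D(t)\,\lambda_i(\mathbf{q}) = 0.
```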

  15. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M_solar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M_solar/h < M_halo < 10¹³ M_solar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
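
    As a minimal sketch of the kind of angular power spectrum estimation mentioned above, the snippet below draws a full-sky Gaussian map from an assumed input C_ell with healpy and recovers the spectrum with the standard anafast estimator. The input spectrum and resolution are placeholders; the paper's quarter-sky maps additionally require masking and mode-coupling corrections that are not shown here.

```python
import numpy as np
import healpy as hp

nside = 256
ell = np.arange(3 * nside)
cl_in = 1e-6 / (1.0 + ell)**2     # placeholder spectrum, not the simulated HI signal
cl_in[:2] = 0.0                   # suppress monopole and dipole

m = hp.synfast(cl_in, nside)      # Gaussian full-sky realization of the input spectrum
cl_out = hp.anafast(m)            # estimated angular power spectrum of the map

# Ratio scatters around 1, with large cosmic variance at low multipoles.
print(cl_out[2:10] / cl_in[2:10])
```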

  16. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    PubMed

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms^2 B_rms/ℓ scaling, implying that magnetic shear becomes more efficient at this point at cascading the energy than the velocity fluctuations. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2), as observed in the solar wind.

  17. Halo detection via large-scale Bayesian inference

    NASA Astrophysics Data System (ADS)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
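
    A heavily simplified sketch of the final step described above, averaging a simulation-calibrated detection probability over posterior density-field samples, is given below. The logistic calibration function, array shapes, and threshold behaviour are hypothetical placeholders standing in for the HADES posterior samples and the simulation-derived halo probability used in the paper.

```python
import numpy as np

def detection_probability(delta_samples, p_halo_given_delta):
    """Average a calibrated detection probability over posterior density-field
    samples; `delta_samples` has shape (n_samples, nx, ny, nz)."""
    return np.mean(p_halo_given_delta(delta_samples), axis=0)

# Hypothetical calibration: a smooth, monotonic function of the density contrast.
p_cal = lambda d: 1.0 / (1.0 + np.exp(-(d - 2.0)))

rng = np.random.default_rng(3)
delta_samples = rng.normal(0.0, 1.0, size=(50, 32, 32, 32))   # fake posterior samples
prob_map = detection_probability(delta_samples, p_cal)
print(prob_map.shape, prob_map.mean())
```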

  18. Correlated Strength in the Nuclear Spectral Function

    SciTech Connect

    D. Rohe; C. S. Armstrong; R. Asaturyan; O. K. Baker; S. Bueltmann; C. Carasco; D. Day; R. Ent; H. C. Fenker; K. Garrow; A. Gasparian; P. Gueye; M. Hauger; A. Honegger; J. Jourdan; C. E. Keppel; G. Kubon; R. Lindgren; A. Lung; D. J. Mack; J. H. Mitchell; H. Mkrtchyan; D. Mocelj; K. Normand; T. Petitjean; O. Rondon; E. Segbefia; I. Sick; S. Stepanyan; L. Tang; F. Tiefenbacher; W. F. Vulcan; G. Warren; S. A. Wood; L. Yuan; M. Zeier; H. Zhu; B. Zihlmann

    2004-10-01

    We have carried out an (e,e'p) experiment at high momentum transfer and in parallel kinematics to measure the strength of the nuclear spectral function S(k,E) at high nucleon momenta k and large removal energies E. This strength is related to the presence of short-range and tensor correlations, and was known hitherto only indirectly and with considerable uncertainty from the lack of strength in the independent-particle region. This experiment locates by direct measurement the correlated strength predicted by theory.

  19. Correlates of functional capacity among centenarians.

    PubMed

    Martin, Peter; MacDonald, Maurice; Margrett, Jennifer; Siegler, Ilene; Poon, Leonard W; Jazwinski, S M; Green, R C; Gearing, M; Markesbery, W R; Woodard, J L; Johnson, M A; Tenover, J S; Rodgers, W L; Hausman, D B; Rott, C; Davey, A; Arnold, J

    2013-04-01

    This study investigated correlates of functional capacity among participants of the Georgia Centenarian Study. Six domains (demographics and health, positive and negative affect, personality, social and economic support, life events and coping, distal influences) were related to functional capacity for 234 centenarians and near centenarians (i.e., 98 years and older). Data were provided by proxy informants. Domain-specific multiple regression analyses suggested that younger centenarians, those living in the community and rated to be in better health were more likely to have higher functional capacity scores. Higher scores in positive affect, conscientiousness, social provisions, religious coping, and engaged lifestyle were also associated with higher levels of functional capacity. The results suggest that functional capacity levels continue to be associated with age after 100 years of life and that positive affect levels and past lifestyle activities as reported by proxies are salient factors of adaptation in very late life.

  20. Hierarchical features of large-scale cortical connectivity

    NASA Astrophysics Data System (ADS)

    da F. Costa, L.; Sporns, O.

    2005-12-01

    The analysis of complex networks has revealed patterns of organization in a variety of natural and artificial systems, including neuronal networks of the brain at multiple scales. In this paper, we describe a novel analysis of the large-scale connectivity between regions of the mammalian cerebral cortex, utilizing a set of hierarchical measurements proposed recently. We examine previously identified functional clusters of brain regions in macaque visual cortex and cat cortex and find significant differences between such clusters in terms of several hierarchical measures, revealing differences in how these clusters are embedded in the overall cortical architecture. For example, the ventral cluster of visual cortex maintains structurally more segregated, less divergent connections than the dorsal cluster, which may point to functionally different roles of their constituent brain regions.

  1. Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick

    2016-05-01

    The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. The method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
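
    The key computational idea, that Fourier-space convolutions become configuration-space products and vice versa, can be illustrated with a one-dimensional toy example. This is only a sketch of the convolution-theorem principle, not the paper's full reduction of 3D loop integrals to 1D transforms, and the sampled "power spectrum" below is a placeholder shape.

```python
import numpy as np

# Toy 1D "power spectrum" sampled on a k-grid (placeholder shape, not a fit).
k = np.linspace(0.01, 1.0, 512)
P = k / (1.0 + (k / 0.1)**3)

# Direct linear convolution of P with itself: O(N^2) operations.
direct = np.convolve(P, P)

# Same convolution via the convolution theorem: products of 1D FFTs, O(N log N).
n = 2 * len(P) - 1
via_fft = np.fft.ifft(np.fft.fft(P, n) * np.fft.fft(P, n)).real

print(np.allclose(direct, via_fft))   # True up to floating-point error
```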

  2. How Large Scale Flows in the Solar Convection Zone may Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle. Differential rotation can amplify the magnetic field and convert poloidal fields into toroidal fields. Poleward meridional flow near the surface can carry magnetic flux that reverses the magnetic poles and can convert toroidal fields into poloidal fields. The deeper, equatorward meridional flow can carry magnetic flux toward the equator where it can reconnect with oppositely directed fields in the other hemisphere. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain the differential rotation and meridional circulation. These convective motions can influence solar activity themselves by shaping the large-scale magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  3. Bias to CMB lensing measurements from the bispectrum of large-scale structure

    NASA Astrophysics Data System (ADS)

    Böhm, Vanessa; Schmittfull, Marcel; Sherwin, Blake D.

    2016-08-01

    The rapidly improving precision of measurements of gravitational lensing of the cosmic microwave background (CMB) also requires a corresponding increase in the precision of theoretical modeling. A commonly made approximation is to model the CMB deflection angle or lensing potential as a Gaussian random field. In this paper, however, we analytically quantify the influence of the non-Gaussianity of large-scale structure (LSS) lenses, arising from nonlinear structure formation, on CMB lensing measurements. In particular, evaluating the impact of the nonzero bispectrum of large-scale structure on the relevant CMB four-point correlation functions, we find that there is a bias to estimates of the CMB lensing power spectrum. For temperature-based lensing reconstruction with CMB stage III and stage IV experiments, we find that this lensing power spectrum bias is negative and is of order 1% of the signal. This corresponds to a shift of multiple standard deviations for these upcoming experiments. We caution, however, that our numerical calculation only evaluates two of the largest bias terms and, thus, only provides an approximate estimate of the full bias. We conclude that further investigation into lensing biases from nonlinear structure formation is required and that these biases should be accounted for in future lensing analyses.

  4. Large-scale structure in a texture-seeded cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Park, Changbom; Spergel, David N.; Turok, Neil

    1991-01-01

    This paper studies the formation of large-scale structure by global texture in a flat universe dominated by cold dark matter. A code for evolution of the texture fields was combined with an N-body code for evolving the dark matter. The results indicate some promising aspects: with only one free parameter, the observed galaxy-galaxy correlation function is reproduced, clusters of galaxies are found to be significantly clustered on a scale of 20-50/h Mpc, and coherent structures of over 50/h Mpc in the galaxy distribution were found. The large-scale streaming motions observed are in good agreement with the observations: the average magnitude of the velocity field smoothed over 30/h Mpc is 430 km/sec. Global texture produces a cosmic Mach number that is compatible with observation. Also, significant evolution of clusters at low redshift was seen. Possible problems for the theory include too high velocity dispersions in clusters, and voids which are not as empty as those observed.

  5. Large-Scale Brain Network Coupling Predicts Total Sleep Deprivation Effects on Cognitive Capacity

    PubMed Central

    Wang, Lubin; Zhai, Tianye; Zou, Feng; Ye, Enmao; Jin, Xiao; Li, Wuju; Qi, Jianlin; Yang, Zheng

    2015-01-01

    Interactions between large-scale brain networks have received most attention in the study of cognitive dysfunction of human brain. In this paper, we aimed to test the hypothesis that the coupling strength of large-scale brain networks will reflect the pressure for sleep and will predict cognitive performance, referred to as sleep pressure index (SPI). Fourteen healthy subjects underwent this within-subject functional magnetic resonance imaging (fMRI) study during rested wakefulness (RW) and after 36 h of total sleep deprivation (TSD). Self-reported scores of sleepiness were higher for TSD than for RW. A subsequent working memory (WM) task showed that WM performance was lower after 36 h of TSD. Moreover, SPI was developed based on the coupling strength of salience network (SN) and default mode network (DMN). Significant increase of SPI was observed after 36 h of TSD, suggesting stronger pressure for sleep. In addition, SPI was significantly correlated with both the visual analogue scale score of sleepiness and the WM performance. These results showed that alterations in SN-DMN coupling might be critical in cognitive alterations that underlie the lapse after TSD. Further studies may validate the SPI as a potential clinical biomarker to assess the impact of sleep deprivation. PMID:26218521

  6. Radially dependent large-scale dynamos in global cylindrical shear flows and the local cartesian limit

    NASA Astrophysics Data System (ADS)

    Ebrahimi, F.; Blackman, E. G.

    2016-06-01

    For cylindrical differentially rotating plasmas, we study large-scale magnetic field generation from finite amplitude non-axisymmetric perturbations by comparing numerical simulations with quasi-linear analytic theory. When initiated with a vertical magnetic field of either zero or finite net flux, our global cylindrical simulations exhibit the magnetorotational instability (MRI) and large-scale dynamo growth of radially alternating mean fields, averaged over height and azimuth. This dynamo growth is explained by our analytic calculations of a non-axisymmetric fluctuation-induced electromotive force that is sustained by azimuthal shear of the fluctuating fields. The standard `Ω effect' (shear of the mean field by differential rotation) is unimportant. For the MRI case, we express the large-scale dynamo field as a function of differential rotation. The resulting radially alternating large-scale fields may have implications for angular momentum transport in discs and corona. To connect with previous work on large-scale dynamos with local linear shear and identify the minimum conditions needed for large-scale field growth, we also solve our equations in local Cartesian coordinates. We find that large-scale dynamo growth in a linear shear flow without rotation can be sustained by shear plus non-axisymmetric fluctuations - even if not helical, a seemingly previously unidentified distinction. The linear shear flow dynamo emerges as a more restricted version of our more general new global cylindrical calculations.

  7. Large-scale dimension densities for heart rate variability analysis

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF group (ρ_ls^μ = 0.97 ± 0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 ± 0.13; EH, 0.54 ± 0.05; YH, 0.57 ± 0.05; p < 0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρ_ls^μ (day, 0.65 ± 0.13; night, 0.66 ± 0.12; n.s.), in contrast to healthy controls (day, 0.54 ± 0.05; night, 0.61 ± 0.05; p = 0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
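
    A minimal sketch of the plain Grassberger-Procaccia correlation sum underlying the method is shown below; the normalization and boundary-effect correction that define the large-scale dimension density (LASDID), and the use of real RR-interval data, are not reproduced here. The embedding dimension, delay, and test series are placeholder choices.

```python
import numpy as np

def correlation_sum(x, eps, dim=3, tau=1):
    """Plain Grassberger-Procaccia correlation sum C(eps) for a scalar series x,
    using a delay embedding of dimension `dim` and delay `tau` (O(n^2) memory)."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)          # exclude self-pairs, count each pair once
    return np.mean(dists[iu] < eps)

# Placeholder series (white noise); a real analysis would use an RR-interval series.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

# Dimension estimate from the slope of log C(eps) vs log eps at the larger scales,
# loosely in the spirit of the large-scale dimension density.
eps = np.logspace(-0.5, 0.5, 8) * x.std()
C = np.array([correlation_sum(x, e) for e in eps])
slope = np.polyfit(np.log(eps), np.log(C), 1)[0]
print(slope)
```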

  8. Quasars as a Tracer of Large-scale Structures in the Distant Universe

    NASA Astrophysics Data System (ADS)

    Song, Hyunmi; Park, Changbom; Lietzen, Heidi; Einasto, Maret

    2016-08-01

    We study the dependence of the number density and properties of quasars on the background galaxy density using the currently largest spectroscopic data sets of quasars and galaxies. We construct a galaxy number density field smoothed over the variable smoothing scale of between approximately 10 and 20 h^-1 Mpc over the redshift range 0.46 < z < 0.59 using the Sloan Digital Sky Survey (SDSS) Data Release 12 (DR12) Constant MASS galaxies. The quasar sample is prepared from the SDSS-I/II DR7. We examine the correlation of incidence of quasars with the large-scale background density and the dependence of quasar properties such as bolometric luminosity, black hole mass, and Eddington ratio on the large-scale density. We find a monotonic correlation between the quasar number density and large-scale galaxy number density, which is fitted well with a power-law relation, n_Q ∝ ρ_G^0.618. We detect weak dependences of quasar properties on the large-scale density, such as a positive correlation between black hole mass and density, and a negative correlation between luminosity and density. We discuss the possibility of using quasars as a tracer of large-scale structures at high redshifts, which may be useful for studies of the growth of structures in the high-redshift universe.
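
    The quoted relation n_Q ∝ ρ_G^0.618 is a straight line in log-log space, so its index can be recovered with a simple linear fit; the sketch below does this on synthetic data with made-up densities and scatter, not the SDSS measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mock background galaxy densities and quasar densities with log-normal scatter
# (arbitrary units, placeholder data).
rho_g = rng.uniform(0.5, 3.0, size=500)
n_q = rho_g**0.618 * rng.lognormal(0.0, 0.1, size=500)

# Power-law index from a straight-line fit in log-log space.
alpha, log_amp = np.polyfit(np.log(rho_g), np.log(n_q), 1)
print(alpha)   # close to the input 0.618
```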

  9. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  10. Pass-transistor very large scale integration

    NASA Technical Reports Server (NTRS)

    Maki, Gary K. (Inventor); Bhatia, Prakash R. (Inventor)

    2004-01-01

    Logic elements are provided that permit reductions in layout size and avoidance of hazards. Such logic elements may be included in libraries of logic cells. A logical function to be implemented by the logic element is decomposed about logical variables to identify factors corresponding to combinations of the logical variables and their complements. A pass transistor network is provided for implementing the pass network function in accordance with this decomposition. The pass transistor network includes ordered arrangements of pass transistors that correspond to the combinations of variables and complements resulting from the logical decomposition. The logic elements may act as selection circuits and be integrated with memory and buffer elements.

  11. Characterizing unknown systematics in large scale structure surveys

    SciTech Connect

    Agarwal, Nishant; Ho, Shirley; Myers, Adam D.; Seo, Hee-Jong; Ross, Ashley J.; Bahcall, Neta; Brinkmann, Jonathan; Eisenstein, Daniel J.; Muna, Demitri; Palanque-Delabrouille, Nathalie; Yèche, Christophe; Petitjean, Patrick; Schneider, Donald P.; Streblyanska, Alina; Weaver, Benjamin A.

    2014-04-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
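
    In the spirit of the cross-correlation test described above (widely separated photometric redshift slices should be nearly uncorrelated in the absence of systematics), the sketch below flags binned angular power spectrum measurements whose slice-to-slice cross-power is large relative to the auto-power. The statistic, tolerance, and spectra are illustrative placeholders, not the paper's actual contamination estimator or SDSS data.

```python
import numpy as np

def flag_contaminated_bins(cl_auto_1, cl_auto_2, cl_cross, tol=0.1):
    """Flag power spectrum bins where the cross-correlation between two disjoint
    redshift slices exceeds `tol` times the geometric mean of their auto-power,
    a rough proxy for contamination by shared (unknown) systematics."""
    ratio = np.abs(cl_cross) / np.sqrt(cl_auto_1 * cl_auto_2)
    return ratio > tol

# Hypothetical binned spectra (placeholders).
cl1 = np.array([5.0, 3.0, 2.0, 1.5, 1.0]) * 1e-4
cl2 = np.array([4.0, 2.5, 1.8, 1.2, 0.9]) * 1e-4
clx = np.array([0.2, 1.5, 0.1, 0.05, 0.02]) * 1e-4
print(flag_contaminated_bins(cl1, cl2, clx))   # only the second bin is flagged
```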

  12. Large-scale regional comparisons of ecosystem processes: Methods and approaches

    NASA Astrophysics Data System (ADS)

    Legendre, Louis; Niquil, Nathalie

    2013-01-01

    Large-scale regional marine ecosystems can be compared for various processes that include their structure and biodiversity, functioning, services, and effects on biogeochemical processes. The comparisons can proceed from data up, or from conceptual models down, or from a combination of models and data. This study proposes a typology of methods and approaches that are currently used, or could possibly be used for making large-scale ecosystem comparisons. The various methods and approaches are illustrated with examples drawn from the literature.

  13. Correlation functions for describing and reconstructing soil microstructure: the use of directional correlation functions

    NASA Astrophysics Data System (ADS)

    Karsanina, Marina; Gerke, Kirill; Skvortsova, Elena; Mallants, Dirk

    2015-04-01

    Structural features of porous materials define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between the biologically active soil root zone and the atmosphere, etc.) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore-size and grain-size distributions and thin section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: (i) two-point probability functions, (ii) linear functions, and (iii) two-point cluster functions. This novel approach was tested on thin sections (2.21 × 2.21 cm²) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual porosity (i.e. superpositions of pores and microcracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, including for soil classification, pore
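
    Of the three descriptors listed above, the two-point probability function is the simplest to compute; a minimal sketch for a periodic binary image, using the FFT-based autocorrelation, is given below. The random test image is a placeholder rather than a soil thin section, and the linear and cluster functions are not shown.

```python
import numpy as np

def two_point_probability(img):
    """Directional two-point probability S2(dy, dx) of a binary image: the
    probability that two pixels separated by (dy, dx) both lie in the pore
    phase (img == 1). Uses the FFT autocorrelation, so boundaries are periodic."""
    f = np.fft.fft2(img.astype(float))
    auto = np.fft.ifft2(f * np.conj(f)).real
    return auto / img.size

# Random binary "microstructure" with ~30% porosity (placeholder test image).
rng = np.random.default_rng(1)
img = (rng.random((256, 256)) < 0.3).astype(int)

S2 = two_point_probability(img)
print(S2[0, 0])   # zero lag: equals the porosity (phase volume fraction), ~0.3
print(S2[0, 1])   # probability for two pixels one step apart along one axis
```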

  14. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  15. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate depending on a priori assumptions of network properties and predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution community from a source vertex or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325

  16. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    SciTech Connect

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  17. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  18. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) is intended to work as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  19. Large-scale asymmetric synthesis of a cathepsin S inhibitor.

    PubMed

    Lorenz, Jon C; Busacca, Carl A; Feng, XuWu; Grinberg, Nelu; Haddad, Nizar; Johnson, Joe; Kapadia, Suresh; Lee, Heewon; Saha, Anjan; Sarvestani, Max; Spinelli, Earl M; Varsolona, Rich; Wei, Xudong; Zeng, Xingzhong; Senanayake, Chris H

    2010-02-19

    A potent reversible inhibitor of the cysteine protease cathepsin-S was prepared on large scale using a convergent synthetic route, free of chromatography and cryogenics. Late-stage peptide coupling of a chiral urea acid fragment with a functionalized aminonitrile was employed to prepare the target, using 2-hydroxypyridine as a robust, nonexplosive replacement for HOBT. The two key intermediates were prepared using a modified Strecker reaction for the aminonitrile and a phosphonation-olefination-rhodium-catalyzed asymmetric hydrogenation sequence for the urea. A palladium-catalyzed vinyl transfer coupled with a Claisen reaction was used to produce the aldehyde required for the side chain. Key scale up issues, safety calorimetry, and optimization of all steps for multikilogram production are discussed. PMID:20102230

  20. Large-scale identification of yeast integral membrane protein interactions

    PubMed Central

    Miller, John P.; Lo, Russell S.; Ben-Hur, Asa; Desmarais, Cynthia; Stagljar, Igor; Noble, William Stafford; Fields, Stanley

    2005-01-01

    We carried out a large-scale screen to identify interactions between integral membrane proteins of Saccharomyces cerevisiae by using a modified split-ubiquitin technique. Among 705 proteins annotated as integral membrane, we identified 1,985 putative interactions involving 536 proteins. To ascribe confidence levels to the interactions, we used a support vector machine algorithm to classify interactions based on the assay results and protein data derived from the literature. Previously identified and computationally supported interactions were used to train the support vector machine, which identified 131 interactions of highest confidence, 209 of the next highest confidence, 468 of the next highest, and the remaining 1,085 of low confidence. This study provides numerous putative interactions among a class of proteins that have been difficult to analyze on a high-throughput basis by other approaches. The results identify potential previously undescribed components of established biological processes and roles for integral membrane proteins of ascribed functions. PMID:16093310
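
    A minimal sketch of the confidence-scoring step, training a support vector machine on interactions with prior support and scoring the rest, is shown below using scikit-learn; the feature matrix and labels are synthetic placeholders, not the assay results and literature-derived protein data used in the study.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Placeholder features (e.g. assay readouts plus literature-derived scores) and
# labels marking interactions with prior computational or experimental support.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_new = rng.normal(size=(10, 5))          # unlabeled candidate interactions

# Train the SVM and assign a probability-like confidence to each new interaction.
clf = SVC(probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_new)[:, 1]
print(np.round(scores, 2))
```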

  1. Dark matter, long-range forces, and large-scale structure

    NASA Technical Reports Server (NTRS)

    Gradwohl, Ben-Ami; Frieman, Joshua A.

    1992-01-01

    If the dark matter in galaxies and clusters is nonbaryonic, it can interact with additional long-range fields that are invisible to experimental tests of the equivalence principle. We discuss the astrophysical and cosmological implications of a long-range force coupled only to the dark matter and find rather tight constraints on its strength. If the force is repulsive (attractive), the masses of galaxy groups and clusters (and the mean density of the universe inferred from them) have been systematically underestimated (overestimated). We explore the consequent effects on the two-point correlation function, large-scale velocity flows, and microwave background anisotropies, for models with initial scale-invariant adiabatic perturbations and cold dark matter.

  2. Large-scale quantum mechanical simulations of high-Z metals

    SciTech Connect

    Yang, L H; Hood, R; Pask, J; Klepeis, J

    2007-01-03

    High-Z metals constitute a particular challenge for large-scale ab initio calculations, as they require high resolution due to the presence of strongly localized states and require many eigenstates to be computed due to the large number of electrons and need to accurately resolve the Fermi surface. Here, we report recent findings on high-Z materials, using an efficient massively parallel planewave implementation on some of the largest computational architectures currently available. We discuss the particular architectures employed and methodological advances required to harness them effectively. We present a pair-correlation function for U, calculated using quantum molecular dynamics, and discuss relaxations of Pu atoms in the vicinity of defects in aged and alloyed Pu. We find that the self-irradiation associated with aging has a negligible effect on the compressibility of Pu relative to other factors such as alloying.

  3. The trispectrum in the Effective Field Theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Bertolini, Daniele; Schutz, Katelin; Solon, Mikhail P.; Zurek, Kathryn M.

    2016-06-01

    We compute the connected four point correlation function (the trispectrum in Fourier space) of cosmological density perturbations at one-loop order in Standard Perturbation Theory (SPT) and the Effective Field Theory of Large Scale Structure (EFT of LSS). This paper is a companion to our earlier work on the non-Gaussian covariance of the matter power spectrum, which corresponds to a particular wavenumber configuration of the trispectrum. In the present calculation, we highlight and clarify some of the subtle aspects of the EFT framework that arise at third order in perturbation theory for general wavenumber configurations of the trispectrum. We consistently incorporate vorticity and non-locality in time into the EFT counterterms and lay out a complete basis of building blocks for the stress tensor. We show predictions for the one-loop SPT trispectrum and the EFT contributions, focusing on configurations which have particular relevance for using LSS to constrain primordial non-Gaussianity.

  4. Correlation functions in conformal invariant stochastic processes

    NASA Astrophysics Data System (ADS)

    Alcaraz, Francisco C.; Rittenberg, Vladimir

    2015-11-01

    We consider the problem of correlation functions in the stationary states of one-dimensional stochastic models having conformal invariance. If one considers the space dependence of the correlators, the novel aspect is that although one considers systems with periodic boundary conditions, the observables are described by boundary operators. From our experience with equilibrium problems one would have expected bulk operators. Boundary operators have correlators with critical exponents that are half of those of bulk operators. If one studies the space-time dependence of the two-point function, one has to consider one boundary and one bulk operator. The Raise and Peel model has conformal invariance, as can be shown in the spin-1/2 basis of the Hamiltonian which gives the time evolution of the system. This is an XXZ quantum chain with twisted boundary conditions and local interactions. This Hamiltonian is integrable and the spectrum is known in the finite-size scaling limit. In the stochastic basis in which the process is defined, the Hamiltonian is not local anymore. The mapping into an SOS model helps to define new local operators. As a byproduct, some new properties of the SOS model are conjectured. The predictions of conformal invariance are discussed in the new framework and compared with Monte Carlo simulations.

  5. Cosmology from Cosmic Microwave Background and large- scale structure

    NASA Astrophysics Data System (ADS)

    Xu, Yongzhong

    2003-10-01

    This dissertation consists of a series of studies, constituting four published papers, involving the Cosmic Microwave Background and the large scale structure, which help constrain cosmological parameters and potential systematic errors. First, we present a method for comparing and combining maps with different resolutions and beam shapes, and apply it to the Saskatoon, QMAP and COBE/DMR data sets. Although the Saskatoon and QMAP maps detect signal at the 21σ and 40σ levels, respectively, their difference is consistent with pure noise, placing strong limits on possible systematic errors. In particular, we obtain quantitative upper limits on relative calibration and pointing errors. Splitting the combined data by frequency shows similar consistency between the Ka- and Q-bands, placing limits on foreground contamination. The visual agreement between the maps is equally striking. Our combined QMAP+Saskatoon map, nicknamed QMASK, is publicly available at www.hep.upenn.edu/~xuyz/qmask.html together with its 6495 × 6495 noise covariance matrix. This thoroughly tested data set covers a large enough area (648 square degrees—at the time, the largest degree-scale map available) to allow a statistical comparison with COBE/DMR, showing good agreement. By band-pass-filtering the QMAP and Saskatoon maps, we are also able to spatially compare them scale-by-scale to check for beam- and pointing-related systematic errors. Using the QMASK map, we then measure the cosmic microwave background (CMB) power spectrum on angular scales ℓ ~ 30-200 (1°-6°), and we test it for non-Gaussianity using morphological statistics known as Minkowski functionals. We conclude that the QMASK map is neither a very typical nor a very exceptional realization of a Gaussian random field. At least about 20% of the 1000 Gaussian Monte Carlo maps differ more than the QMASK map from the mean morphological parameters of the Gaussian fields. Finally, we compute the real-space power spectrum and the

  6. An Empirical Relation between the Large-scale Magnetic Field and the Dynamical Mass in Galaxies

    NASA Astrophysics Data System (ADS)

    Tabatabaei, F. S.; Martinsson, T. P. K.; Knapen, J. H.; Beckman, J. E.; Koribalski, B.; Elmegreen, B. G.

    2016-02-01

    The origin and evolution of cosmic magnetic fields as well as the influence of the magnetic fields on the evolution of galaxies are unknown. Though not without challenges, the dynamo theory can explain the large-scale coherent magnetic fields that govern galaxies, but observational evidence for the theory is so far very scarce. Putting together the available data of non-interacting, non-cluster galaxies with known large-scale magnetic fields, we find a tight correlation between the integrated polarized flux density, S_PI, and the rotation speed, v_rot, of galaxies. This leads to an almost linear correlation between the large-scale magnetic field B̄ and v_rot, assuming that the number of cosmic-ray electrons is proportional to the star formation rate, and a super-linear correlation assuming equipartition between magnetic fields and cosmic rays. This correlation cannot be attributed to an active linear α-Ω dynamo, as no correlation holds with global shear or angular speed. It indicates instead a coupling between the large-scale magnetic field and the dynamical mass of the galaxies, B̄ ~ M_dyn^(0.25-0.4). Hence, faster rotating and/or more massive galaxies have stronger large-scale magnetic fields. The observed B̄-v_rot correlation shows that the anisotropic turbulent magnetic field dominates B̄ in fast rotating galaxies, as the turbulent magnetic field, coupled with gas, is enhanced and ordered due to the strong gas compression and/or local shear in these systems. This study supports a stationary condition for the large-scale magnetic field as long as the dynamical mass of galaxies is constant.

  7. Probing Galactic Structure with the Spatial Correlation Function of SEGUE G-dwarf Stars

    NASA Astrophysics Data System (ADS)

    Mao, Qingqing; Berlind, Andreas A.; Holley-Bockelmann, Kelly; Schlesinger, Katharine; Johnson, Jennifer; Rockosi, Constance M.

    2015-01-01

    We apply a commonly-used tool in large scale structure surveys, the 3-dimensional two-point correlation function, to G dwarfs in the Milky Way in an effort to constrain Galactic structure and to search for statistically significant stellar clustering. Our G-dwarf sample is constructed from SDSS SEGUE data by Schlesinger et al. (2012). We find that the correlation function shape along individual SEGUE lines of sight depends sensitively on both the stellar density gradients and the survey geometry. By fitting mock measurements of smooth disk galaxy models to SEGUE data measurements, we obtain strong constraints on the thin and thick disk components of the Milky Way. We also find that the two smooth disks model cannot fully explain the SEGUE data, which indicates substructure on very small scales.
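
    For readers unfamiliar with the tool, the sketch below shows a minimal version of the Landy-Szalay two-point correlation function estimator, ξ(r) = (DD − 2DR + RR)/RR, a standard estimator for measurements of this kind. The uniform mock catalogs, box size and separation bins are illustrative assumptions only; whether this matches the paper's exact estimator and weighting is not stated in the record, and a SEGUE-like analysis would use randoms that reproduce the survey's line-of-sight geometry, which the abstract notes strongly shapes the measurement.

      import numpy as np
      from scipy.spatial import cKDTree

      def pair_counts(a, b, edges):
          """Count pairs between catalogs a and b in the separation bins `edges`."""
          cumulative = cKDTree(a).count_neighbors(cKDTree(b), edges)
          return np.diff(cumulative)

      def landy_szalay(data, randoms, edges):
          nd, nr = len(data), len(randoms)
          dd = pair_counts(data, data, edges) / (nd * (nd - 1))
          rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1))
          dr = pair_counts(data, randoms, edges) / (nd * nr)
          return (dd - 2.0 * dr + rr) / rr

      rng = np.random.default_rng(0)
      data = rng.uniform(0.0, 100.0, size=(2000, 3))      # mock star positions (arbitrary units)
      randoms = rng.uniform(0.0, 100.0, size=(10000, 3))  # mock random catalog, same volume
      edges = np.logspace(-0.5, 1.5, 15)                  # separation bin edges
      print(landy_szalay(data, randoms, edges))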

  8. Inflationary tensor fossils in large-scale structure

    SciTech Connect

    Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui; Kamionkowski, Marc E-mail: mrf65@case.edu E-mail: kamion@jhu.edu

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  9. Large-scale mouse knockouts and phenotypes.

    PubMed

    Ramírez-Solis, Ramiro; Ryder, Edward; Houghton, Richard; White, Jacqueline K; Bottomley, Joanna

    2012-01-01

    Standardized phenotypic analysis of mutant forms of every gene in the mouse genome will provide fundamental insights into mammalian gene function and advance human and animal health. The availability of the human and mouse genome sequences, the development of embryonic stem cell mutagenesis technology, the standardization of phenotypic analysis pipelines, and the paradigm-shifting industrialization of these processes have made this a realistic and achievable goal. The size of this enterprise will require global coordination to ensure economies of scale in both the generation and primary phenotypic analysis of the mutant strains, and to minimize unnecessary duplication of effort. To provide more depth to the functional annotation of the genome, effective mechanisms will also need to be developed to disseminate the information and resources produced to the wider community. Better models of disease, potential new drug targets with novel mechanisms of action, and completely unsuspected genotype-phenotype relationships covering broad aspects of biology will become apparent. To reach these goals, solutions to challenges in mouse production and distribution, as well as development of novel, ever more powerful phenotypic analysis modalities will be necessary. It is a challenging and exciting time to work in mouse genetics.

  10. Correlation functions for glass-forming systems

    PubMed

    Jacobs

    2000-07-01

    We present a simple, linear, partial-differential equation for the density-density correlation function in a glass-forming system. The equation is written down on the basis of fundamental and general considerations of linearity, symmetry, stability, thermodynamic irreversibility and consistency with the equation of continuity (i.e., conservation of matter). The dynamical properties of the solutions show a change in behavior characteristic of the liquid-glass transition as a function of one of the parameters (temperature). The equation can be shown to lead to the simplest mode-coupling theory of glasses and provides a partial justification of this simplest theory. It provides also a method for calculating the space dependence of the correlation functions not available otherwise. The results suggest certain differences in behavior between glassy solids and glass-forming liquids which may be accessible to experiment. A brief discussion is presented of how the method can be applied to other systems such as sandpiles and vortex glasses in type II superconductors. PMID:11088609

  11. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  12. Episodic memory in aspects of large-scale brain networks

    PubMed Central

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  13. Large-scale recording of astrocyte activity

    PubMed Central

    Nimmerjahn, Axel; Bergles, Dwight E.

    2015-01-01

    Astrocytes are highly ramified glial cells found throughout the central nervous system (CNS). They express a variety of neurotransmitter receptors that can induce widespread chemical excitation, placing these cells in an optimal position to exert global effects on brain physiology. However, the activity patterns of only a small fraction of astrocytes have been examined and techniques to manipulate their behavior are limited. As a result, little is known about how astrocytes modulate CNS function on synaptic, microcircuit, or systems levels. Here, we review current and emerging approaches for visualizing and manipulating astrocyte activity in vivo. Deciphering how astrocyte network activity is controlled in different physiological and pathological contexts is critical for defining their roles in the healthy and diseased CNS. PMID:25665733

  14. Large scale propagation intermittency in the atmosphere

    NASA Astrophysics Data System (ADS)

    Mehrabi, Ali

    2000-11-01

    Long-term (several minutes to hours) amplitude variations observed in outdoor sound propagation experiments at Disneyland, California, in February 1998 are explained in terms of a time varying index of refraction. The experimentally propagated acoustic signals were received and recorded at several locations ranging from 300 meters to 2,800 meters. Meteorological data was taken as a function of altitude simultaneously with the received signal levels. There were many barriers along the path of acoustic propagation that affected the received signal levels, especially at short ranges. In a downward refraction situation, there could be a random change of amplitude in the predicted signals. A computer model based on the Fast Field Program (FFP) was used to compute the signal loss at the different receiving locations and to verify that the variations in the received signal levels can be predicted numerically. The calculations agree with experimental data with the same trend variations in average amplitude.

  15. Sum rule of the correlation function

    SciTech Connect

    Maj, Radoslaw; Mrowczynski, Stanislaw

    2005-04-01

    We discuss a sum rule satisfied by the correlation function of two particles with small relative momenta. The sum rule, which results from the completeness condition of the quantum states of two particles, is derived and checked to see how it works in practice. The sum rule is shown to be trivially satisfied by free particle pairs. We then analyze three different systems of interacting particles: neutron and proton pairs in the s-wave approximation, the so-called hard spheres with phase shifts taken into account up to l=4, and finally, the Coulomb system of two charged particles.

  16. Meson's correlation functions in a nuclear medium

    NASA Astrophysics Data System (ADS)

    Park, Chanyong

    2016-09-01

    We investigate meson's spectrum, decay constant and form factor in a nuclear medium through holographic two- and three-point correlation functions. To describe a nuclear medium composed of protons and neutrons, we consider a hard wall model on the thermal charged AdS geometry and show that due to the isospin interaction with a nuclear medium, there exist splittings of the meson's spectrum, decay constant and form factor relying on the isospin charge. In addition, we show that the ρ-meson's form factor describing an interaction with pseudoscalar fluctuation decreases when the nuclear density increases, while the interaction with a longitudinal part of an axial vector meson increases.

  17. Large-scale magnetic fields, dark energy, and QCD

    SciTech Connect

    Urban, Federico R.; Zhitnitsky, Ariel R.

    2010-08-15

    Cosmological magnetic fields are being observed with ever increasing correlation lengths, possibly reaching the size of superclusters, therefore disfavoring the conventional picture of generation through primordial seeds later amplified by galaxy-bound dynamo mechanisms. In this paper we put forward a fundamentally different approach that links such large-scale magnetic fields to the cosmological vacuum energy. In our scenario the dark energy is due to the Veneziano ghost (which solves the U(1)_A problem in QCD). The Veneziano ghost couples through the triangle anomaly to the electromagnetic field with a constant which is unambiguously fixed in the standard model. While this interaction does not produce any physical effects in Minkowski space, it triggers the generation of a magnetic field in an expanding universe at every epoch. The induced energy of the magnetic field is thus proportional to the cosmological vacuum energy: ρ_EM ≈ B² ≈ (α/4π)² ρ_DE, with ρ_DE hence acting as a source for the magnetic energy ρ_EM. The corresponding numerical estimate leads to a magnitude in the nG range. There are two unique and distinctive predictions of our proposal: an uninterrupted active generation of Hubble-size correlated magnetic fields throughout the evolution of the Universe, and the presence of parity violation on the enormous scales 1/H, which apparently has been already observed in the CMB. These predictions are entirely rooted in the standard model of particle physics.

  18. Safeguards instruments for Large-Scale Reprocessing Plants

    SciTech Connect

    Hakkila, E.A.; Case, R.S.; Sonnier, C.

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  19. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  20. In situ vitrification large-scale operational acceptance test analysis

    SciTech Connect

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

  1. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  2. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus represents effectively a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 − β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  3. Detecting correlations among functional-sequence motifs

    NASA Astrophysics Data System (ADS)

    Pirino, Davide; Rigosa, Jacopo; Ledda, Alice; Ferretti, Luca

    2012-06-01

    Sequence motifs are words of nucleotides in DNA with biological functions, e.g., gene regulation. Identification of such words proceeds through rejection of Markov models on the expected motif frequency along the genome. Additional biological information can be extracted from the correlation structure among patterns of motif occurrences. In this paper a log-linear multivariate intensity Poisson model is estimated via expectation maximization on a set of motifs along the genome of E. coli K12. The proposed approach allows for excitatory as well as inhibitory interactions among motifs and between motifs and other genomic features like gene occurrences. Our findings confirm previous stylized facts about such types of interactions and shed new light on genome-maintenance functions of some particular motifs. We expect these methods to be applicable to a wider set of genomic features.
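
    As a much-simplified stand-in for the model described above, the sketch below fits a log-linear Poisson regression of one motif's per-window counts on the counts of another motif and of gene occurrences; positive coefficients then indicate excitatory and negative coefficients inhibitory interactions. The synthetic data, the motif names and the use of a plain Poisson regression (rather than the paper's multivariate intensity model estimated via expectation maximization) are all assumptions made for illustration.

      import numpy as np
      from sklearn.linear_model import PoissonRegressor

      rng = np.random.default_rng(2)
      n_windows = 5000
      genes = rng.poisson(1.0, n_windows)                 # gene occurrences per genomic window
      motif_a = rng.poisson(0.5 + 0.4 * genes)            # a motif excited by gene occurrences
      rate_b = np.exp(-0.5 + 0.3 * motif_a - 0.2 * genes) # a motif excited by A, inhibited by genes
      motif_b = rng.poisson(rate_b)

      X = np.column_stack([motif_a, genes])
      fit = PoissonRegressor(alpha=0.0).fit(X, motif_b)   # log-linear (log-link) Poisson fit
      print("estimated interaction coefficients (motif_a, genes):", fit.coef_)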

  4. Small-scale universality and large-scale diversity. Comment on "Drivers of structural features in gene regulatory networks: From biophysical constraints to biological function" by O.C. Martin, A. Krzywicki, and M. Zagorski

    NASA Astrophysics Data System (ADS)

    Ispolatov, Yaroslav

    2016-07-01

    Martin et al. undertook an arduous task of reviewing vast literature on evolution and functionality of directed biological networks and gene networks in particular. The literature is assessed addressing a question of whether a set of features particular for gene networks is repeatedly recreated among unrelated species driven by selection pressure or has evolved once and is being inherited. To argue for the former mechanism, Martin and colleagues explore the following examples: Scale-free out-degree distribution.

  5. Correlation functions in ω-deformed supergravity

    NASA Astrophysics Data System (ADS)

    Borghese, A.; Pang, Y.; Pope, C. N.; Sezgin, E.

    2015-02-01

    Gauged supergravity in four dimensions is now known to admit a deformation characterized by a real parameter ω lying in the interval 0 ≤ ω ≤ π/8. We analyse the fluctuations about its anti-de Sitter vacuum, and show that the full supersymmetry can be maintained by the boundary conditions only for ω = 0. For non-vanishing ω, and requiring that there be no propagating spin s > 1 fields on the boundary, we show that is the maximum degree of supersymmetry that can be preserved by the boundary conditions. We then construct in detail the consistent truncation of the theory to give ω-deformed SO(6) gauged supergravity, again with ω in the range 0 ≤ ω ≤ π/8. We show that this theory admits fully supersymmetry-preserving boundary conditions not only for ω = 0, but also for ω = π/8. These two theories are related by a U(1) electric-magnetic duality. We observe that the only three-point functions that depend on ω involve the coupling of an SO(6) gauge field with the U(1) gauge field and a scalar or pseudo-scalar field. We compute these correlation functions and compare them with those of the undeformed theory. We find that the correlation functions in the ω = π/8 theory holographically correspond to amplitudes in the U(N)_k × U(N)_{-k} ABJM model in which the U(1) Noether current is replaced by a dynamical U(1) gauge field. We also show that the ω-deformed gauged supergravities can be obtained via consistent reductions from the eleven-dimensional or ten-dimensional type IIA supergravities.

  6. A coarse grained perturbation theory for the Large Scale Structure, with cosmology and time independence in the UV

    SciTech Connect

    Manzotti, Alessandro; Peloso, Marco; Pietroni, Massimo; Viel, Matteo; Villaescusa-Navarro, Francisco E-mail: peloso@physics.umn.edu E-mail: viel@oats.inaf.it

    2014-09-01

    Standard cosmological perturbation theory (SPT) for the Large Scale Structure (LSS) of the Universe fails at small scales (UV) due to strong nonlinearities and to multistreaming effects. In ref. [1] a new framework was proposed in which the large scales (IR) are treated perturbatively while the information on the UV, mainly small-scale velocity dispersion, is obtained by nonlinear methods like N-body simulations. Here we develop this approach, showing that it is possible to reproduce the fully nonlinear power spectrum (PS) by combining a simple (and fast) 1-loop computation for the IR scales and the measurement of a single, dominant, correlator from N-body simulations for the UV ones. We measure this correlator for a suite of seven different cosmologies, and we show that its inclusion in our perturbation scheme reproduces the fully non-linear PS with percent level accuracy, for wave numbers up to k ∼ 0.4 h Mpc^-1 down to z = 0. We then show that, once this correlator has been measured in a given cosmology, there is no need to run a new simulation for a different cosmology in the suite. Indeed, by rescaling this correlator by a proper function computable in SPT, the reconstruction procedure works also for the other cosmologies and for all redshifts, with comparable accuracy. Finally, we clarify the relation of this approach to the Effective Field Theory methods recently proposed in the LSS context.

  7. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related with such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For the new area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability can explain > 65% of the existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
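
    The sketch below illustrates the general workflow the abstract describes: fit a logistic regression for large-scale-landslide presence from geomorphological predictors in one area, then validate the resulting distribution probability in another area using the area under the receiver operating characteristic curve. The predictor names, coefficients and synthetic data are assumptions for illustration and are not the values derived in the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)

      def make_area(n):
          """Generate a synthetic area with slope (deg), relief (m) and a lithology flag."""
          slope = rng.uniform(5.0, 45.0, n)
          relief = rng.uniform(100.0, 2000.0, n)
          weak_rock = rng.integers(0, 2, n)
          logit = -6.0 + 0.08 * slope + 0.002 * relief + 1.2 * weak_rock
          landslide = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
          return np.column_stack([slope, relief, weak_rock]), landslide.astype(int)

      X_train, y_train = make_area(2000)   # area used to derive the distribution equation
      X_new, y_new = make_area(1000)       # independent validation area

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      p_new = model.predict_proba(X_new)[:, 1]
      print("ROC AUC on the new area:", round(roc_auc_score(y_new, p_new), 3))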

  8. Interloper bias in future large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Pullen, Anthony R.; Hirata, Christopher M.; Doré, Olivier; Raccanelli, Alvise

    2016-02-01

    Next-generation spectroscopic surveys will map the large-scale structure of the observable universe, using emission line galaxies as tracers. While each survey will map the sky with a specific emission line, interloping emission lines can masquerade as the survey's intended emission line at different redshifts. Interloping lines from galaxies that are not removed can contaminate the power spectrum measurement, mixing correlations from various redshifts and diluting the true signal. We assess the potential for power spectrum contamination, finding that an interloper fraction worse than 0.2% could bias power spectrum measurements for future surveys by more than 10% of statistical errors, while also biasing power spectrum inferences. We also construct a formalism for predicting cosmological parameter measurement bias, demonstrating that a 0.15%-0.3% interloper fraction could bias the growth rate by more than 10% of the error, which can affect constraints on gravity from upcoming surveys. We use the COSMOS Mock Catalog (CMC), with the emission lines rescaled to better reproduce recent data, to predict potential interloper fractions for the Prime Focus Spectrograph (PFS) and the Wide-Field InfraRed Survey Telescope (WFIRST). We find that secondary line identification, or confirming galaxy redshifts by finding correlated emission lines, can remove interlopers for PFS. For WFIRST, we use the CMC to predict that the 0.2% target can be reached for the WFIRST Hα survey, but sensitive optical and near-infrared photometry will be required. For the WFIRST [O III] survey, the predicted interloper fractions reach several percent and their effects will have to be estimated and removed statistically (e.g., with deep training samples). These results are optimistic as the CMC does not capture the full set of correlations of galaxy properties in the real Universe, and they do not include blending effects. Mitigating interloper contamination will be crucial to the next generation of

  9. Structure–function correlations in tyrosinases

    PubMed Central

    Kanteev, Margarita; Goldfeder, Mor; Fishman, Ayelet

    2015-01-01

    Tyrosinases are metalloenzymes belonging to the type-3 copper protein family which contain two copper ions in the active site. They are found in various prokaryotes as well as in plants, fungi, arthropods, and mammals and are responsible for pigmentation, wound healing, radiation protection, and primary immune response. Tyrosinases perform two sequential enzymatic reactions: hydroxylation of monophenols and oxidation of diphenols to form quinones which polymerize spontaneously to melanin. Two other members of this family are catechol oxidases, which are prevalent mainly in plants and perform only the second oxidation step, and hemocyanins, which lack enzymatic activity and are oxygen carriers. In the last decade, several structures of plant and bacterial tyrosinases were determined, some with substrates or inhibitors, highlighting features and residues which are important for copper uptake and catalysis. This review summarizes the updated information on structure–function correlations in tyrosinases along with comparison to other type-3 copper proteins. PMID:26104241

  10. Really computing nonperturbative real time correlation functions

    NASA Astrophysics Data System (ADS)

    Bödeker, Dietrich; McLerran, Larry; Smilga, Andrei

    1995-10-01

    It has been argued by Grigoriev and Rubakov that one can simulate real time processes involving baryon number nonconservation at high temperature using real time evolution of classical equations, and summing over initial conditions with a classical thermal weight. It is known that such a naive algorithm is plagued by ultraviolet divergences. In quantum theory the divergences are regularized, but the corresponding graphs involve the contributions from the hard momentum region and also the new scale ~gT comes into play. We propose a modified algorithm which involves solving the classical equations of motion for the effective hard thermal loop Hamiltonian with an ultraviolet cutoff μ>>gT and integrating over initial conditions with a proper thermal weight. Such an algorithm should provide a determination of the infrared behavior of the real time correlation function T determining the baryon violation rate. Hopefully, the results obtained in this modified algorithm will be cutoff independent.

  11. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, the pseudo relative permeability is derived using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.
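
    As a rough illustration of how such parameters can follow from the statistics of the permeability field alone, the sketch below uses the common first-order stochastic result that the longitudinal macrodispersion coefficient scales as D ≈ σ²_lnK · λ · u (variance of the log-permeability times its correlation length times the mean velocity). This Gelhar-type relation and the numbers used are assumptions for illustration; they are not necessarily the exact D(σ², λ) and H expressions derived in the paper.

      # Toy estimate of a macrodispersion coefficient from permeability-field statistics,
      # assuming the first-order result D ~ sigma_lnK^2 * lambda * u.
      def macrodispersion(ln_k_variance, correlation_length_m, mean_velocity_m_per_day):
          """First-order (Gelhar-type) longitudinal macrodispersion estimate."""
          return ln_k_variance * correlation_length_m * mean_velocity_m_per_day

      sigma2_lnk = 1.5      # variance of ln(permeability), illustrative value
      corr_length = 10.0    # correlation length of the field (m), illustrative value
      velocity = 0.2        # mean velocity (m/day), illustrative value

      D = macrodispersion(sigma2_lnk, corr_length, velocity)
      print("macrodispersion coefficient D =", D, "m^2/day")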

  12. Large-scale sequencing and analytical processing of ESTs.

    PubMed

    Mitreva, Makedonka; Mardis, Elaine R

    2009-01-01

    Expressed sequence tags (ESTs) have proven to be one of the most rapid and cost-effective routes to gene discovery for eukaryotic genomes. Furthermore, their multipurpose uses, such as in probe design for microarrays, determining alternative splicing, verifying open reading frames, and confirming exon/intron and gene boundaries, to name a few, further justify their inclusion in many genomic characterization projects. Hence, there has been a constant increase in the number of ESTs deposited into the dbEST division of GenBank. This trend also correlates to ever-improving molecular techniques for obtaining biological material, performing RNA extraction, and constructing cDNA libraries, and predominantly to ever-evolving sequencing chemistry and instrumentation, as well as to decreased sequencing costs. This chapter describes large-scale sequencing of ESTs on two distinct platforms: the ABI 3730xl and the 454 Life Sciences GS20 sequencers, and the corresponding processes of sequence extraction, processing, and submissions to public databases. While the conventional 3730xl sequencing process is described, starting with the plating of an already-existing cDNA library, the section on 454 GS20 pyrosequencing also includes a method for generating full-length cDNA sequences. With appropriate bioinformatics tools, each of these platforms either used independently or coupled together provide a powerful combination for comprehensive exploration of an organism's transcriptome.

  13. Large-scale Chromosomal Movements During Interphase Progression in Drosophila

    PubMed Central

    Csink, Amy K.; Henikoff, Steven

    1998-01-01

    We examined the effect of cell cycle progression on various levels of chromosome organization in Drosophila. Using bromodeoxyuridine incorporation and DNA quantitation in combination with fluorescence in situ hybridization, we detected gross chromosomal movements in diploid interphase nuclei of larvae. At the onset of S-phase, an increased separation was seen between proximal and distal positions of a long chromosome arm. Progression through S-phase disrupted heterochromatic associations that have been correlated with gene silencing. Additionally, we have found that large-scale G1 nuclear architecture is continually dynamic. Nuclei display a Rabl configuration for only ∼2 h after mitosis, and with further progression of G1-phase can establish heterochromatic interactions between distal and proximal parts of the chromosome arm. We also find evidence that somatic pairing of homologous chromosomes is disrupted during S-phase more rapidly for a euchromatic than for a heterochromatic region. Such interphase chromosome movements suggest a possible mechanism that links gene regulation via nuclear positioning to the cell cycle: delayed maturation of heterochromatin during G1-phase delays establishment of a silent chromatin state. PMID:9763417

  14. Ectopically tethered CP190 induces large-scale chromatin decondensation

    NASA Astrophysics Data System (ADS)

    Ahanger, Sajad H.; Günther, Katharina; Weth, Oliver; Bartkuhn, Marek; Bhonde, Ramesh R.; Shouche, Yogesh S.; Renkawitz, Rainer

    2014-01-01

    Insulator mediated alteration in higher-order chromatin and/or nucleosome organization is an important aspect of epigenetic gene regulation. Recent studies have suggested a key role for CP190 in such processes. In this study, we analysed the effects of ectopically tethered insulator factors on chromatin structure and found that CP190 induces large-scale decondensation when targeted to a condensed lacO array in mammalian and Drosophila cells. In contrast, dCTCF alone is unable to cause such a decondensation; however, when CP190 is present, dCTCF recruits it to the lacO array and mediates chromatin unfolding. The CP190-induced opening of chromatin may not be correlated with transcriptional activation, as binding of CP190 does not enhance luciferase activity in reporter assays. We propose that CP190 may mediate histone modification and chromatin remodelling activity to induce an open chromatin state by its direct recruitment or targeting by a DNA binding factor such as dCTCF.

  15. Constraints on modified Chaplygin gas from large scale structure

    NASA Astrophysics Data System (ADS)

    Paul, Bikash Chandra; Thakur, Prasenjit; Beesham, Aroon

    2016-10-01

    We study cosmological models with modified Chaplygin gas (MCG) to determine observational constraints on its EoS parameters using background and growth test data. The background test data consist of H(z)-z data, the Baryonic Acoustic Oscillation peak parameter, the CMB shift parameter and SN Ia data, while the growth test data consist of the linear growth function for the large-scale structure of the universe; both are used to study MCG as a dark energy candidate. For a given range of redshift, the WiggleZ measurements and rms mass fluctuations from Ly-α data are employed to analyze the cosmological models numerically and constrain the MCG parameters. The Wang-Steinhardt ansatz for the growth index (γ) and growth function (f) is also considered for the numerical analysis. The best-fit values of the EoS parameters determined here are used to study the variation with redshift of f, the growth index (γ), the EoS parameter, the squared sound speed and the deceleration parameter. The constraints on the MCG parameters found here are compared with those of the GCG (generalized Chaplygin gas) model for a viable cosmology. Cosmologies with MCG satisfactorily describe late acceleration followed by a matter dominated phase. The range of values of the EoS parameters and the associated parameters (f, γ, ω, Ω, c_s², q) are also determined from observational data in order to understand the suitability of the MCG model.
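
    As a reminder of the Wang-Steinhardt ansatz mentioned above, the sketch below evaluates the growth rate f(z) ≈ Ω_m(z)^γ with the leading-order growth index γ ≈ 3(1 − w)/(5 − 6w) for a constant equation of state w. The cosmological parameter values are illustrative defaults, not the best-fit MCG values obtained in the paper.

      import numpy as np

      def omega_m(z, om0=0.3, w=-1.0):
          """Matter density parameter in a flat universe with constant-w dark energy."""
          a = 1.0 / (1.0 + z)
          de = (1.0 - om0) * a ** (-3.0 * (1.0 + w))
          return om0 * a ** -3 / (om0 * a ** -3 + de)

      def growth_rate(z, om0=0.3, w=-1.0):
          gamma = 3.0 * (1.0 - w) / (5.0 - 6.0 * w)   # leading-order Wang-Steinhardt index
          return omega_m(z, om0, w) ** gamma

      for z in (0.0, 0.5, 1.0, 2.0):
          print(f"z = {z:3.1f}   f = {growth_rate(z):.3f}")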

  16. Kinematics and Dynamics in Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    dell'Antonio, Ian Pietro

    1995-01-01

    We study a sample of x-ray observed groups of galaxies to examine the relation between group velocity dispersions and x-ray luminosities. For the rich groups, L_x ∼ σ^(4.0±0.6), but poorer systems follow a flatter relation. This L_x–σ relation probably arises from a combination of extended gas and individual galaxy emission. We then concentrate on six poor clusters of galaxies with higher-quality x-ray data, and we measure the virial mass, gas mass, and x-ray temperature. From the x-ray surface brightness distribution, we construct models of the mass distribution. We use a modified V/V_max test to test whether the galaxies trace the potential marked by the gas. The galaxy distribution is consistent with the density distribution inferred from the x-rays. The mass in galaxies is ∼3h^(-1)% of the total mass of the systems. Galaxies contribute significantly to the baryonic mass total: M_gas/M_gal ∼ 1.4h^(-1/2), similar to the value for rich clusters. The baryon fraction in rich groups is ∼0.08 (for H_0 = 100), about half that in rich clusters. This result has significant implications for the origin of large-scale structure. In a study of structure on a larger scale, we use the Tully-Fisher (TF) relation to examine the kinematics of the Great Wall of Galaxies. First, we examine the relation between rotation profiles of galaxies and HI linewidths, and investigate the effects on the TF relation. The rotation curve profile shapes and magnitudes of galaxies are correlated, implying that a galaxy yields different distance estimates with a linewidth measured at a different fraction of peak emission. Indiscriminately combining data based on different measures of the "rotation velocity" into a single TF relation leads to systematic errors and biases in the velocity field. We evaluate these effects using optical rotation curves and HI linewidth data. The TF relation can be improved by adding shape parameters to characterize the HI profiles. We construct the I

  17. Modeling the three-point correlation function

    SciTech Connect

    Marin, Felipe; Wechsler, Risa; Frieman, Joshua A.; Nichol, Robert; /Portsmouth U., ICG

    2007-04-01

    We present new theoretical predictions for the galaxy three-point correlation function (3PCF) using high-resolution dissipationless cosmological simulations of a flat {Lambda}CDM Universe which resolve galaxy-size halos and subhalos. We create realistic mock galaxy catalogs by assigning luminosities and colors to dark matter halos and subhalos, and we measure the reduced 3PCF as a function of luminosity and color in both real and redshift space. As galaxy luminosity and color are varied, we find small differences in the amplitude and shape dependence of the reduced 3PCF, at a level qualitatively consistent with recent measurements from the SDSS and 2dFGRS. We confirm that discrepancies between previous 3PCF measurements can be explained in part by differences in binning choices. We explore the degree to which a simple local bias model can fit the simulated 3PCF. The agreement between the model predictions and galaxy 3PCF measurements lends further credence to the straightforward association of galaxies with CDM halos and subhalos.
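
    For reference, the reduced 3PCF discussed in this record is conventionally defined from the connected two- and three-point functions, and the simple local bias model relates the galaxy and matter statistics through the linear and quadratic bias parameters b_1 and b_2. The expressions below follow the standard convention (e.g. Fry & Gaztañaga) and are included only as a reminder; the paper's own notation and binning choices may differ.

      Q(r_{12}, r_{23}, r_{31}) \;=\;
          \frac{\zeta(r_{12}, r_{23}, r_{31})}
               {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
      \qquad
      Q_{\mathrm{gal}} \;\simeq\; \frac{1}{b_1}\left( Q_{\mathrm{matter}} + \frac{b_2}{b_1} \right).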

  18. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  19. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  1. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  2. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  3. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied as far as parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.

  4. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily focussing on canonical flows - zero pressure gradient boundary layers, flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  5. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  6. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for the neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes, providing an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening-switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducer applications. Emphasis is on the use of high-performance ferroelectrics with the objective of developing an opening switch that would be suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  7. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  8. Assessing large-scale wildlife responses to human infrastructure development.

    PubMed

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.

  10. Large-Scale Molecular and Nanoelectronic Circuits & Associated Opportunities

    NASA Astrophysics Data System (ADS)

    Heath, Jim

    2007-03-01

    According to the International Technology Roadmap for Semiconductors (ITRS), by the year 2020 it is expected that the most closely spaced metallic wires within a DRAM circuit will be patterned at a pitch of about 30 nm, implying that the conductors themselves will be around 15 nm wide. However, virtually every aspect of achieving this technology is considered to be `red,' meaning that there is no known solution. Nevertheless, ultra-high-density (semiconductor and metallic) nanowire circuitry, fabricated at 2020 dimensions and beyond, would be expected to provide a host of valuable traditional (logic & memory) and nontraditional (sensing, thermoelectrics, etc.) functions. In this talk we will discuss the fabrication and testing of large-scale circuitry (> 10^5 devices) aimed at these various applications. This will include a 160,000-bit memory circuit that is no larger than a white blood cell, high-performance, ultra-dense & energy-efficient logic circuitry, nanowire sensing arrays, and high-performance silicon-based thermoelectric devices. We will also discuss how these circuits may be fabricated on a host of substrates, including plastic.

  11. Intensive agriculture erodes β-diversity at large scales.

    PubMed

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity.

  12. Large Scale Structure in the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Koekemoer, Anton; Mould, Jeremy; Cooke, Jeffrey; Wyithe, Stuart; Lidman, Christopher; Trenti, Michele; Abbott, Tim; Kunder, Andrea; Barone-Nugent, Robert; Tescari, Edoardo; Katsianis, Antonios

    2014-02-01

    We propose to capitalize on the high sensitivity at red wavelengths and large field of view of DECam to detect the brightest and rarest galaxies at z=6-7. Our 2012 results show the signature of large scale structure with a wavenumber of order 0.1 inverse Mpc, in line with expectations from primordial non-Gaussianity. However, the signal-to-noise in one deep field from two nights' data is insufficient for a robust conclusion. Ten nights' data will do the job. These data will also constrain the galaxy contribution to reionization by enabling a tighter constraint on the full galaxy luminosity function, including the faint end. The observations will be executed with a cadence and depth that will enable the detection of super-luminous supernovae at z=6-7. Super-luminous supernovae are a recently observed class of supernovae that are 10-100x more luminous than typical supernovae. This class includes pair-instability supernovae, a rare third type of supernova explosion of which only three events are known. The proposed observations will greatly extend the current reach of supernova research, examining their occurrence rate and properties near the epoch of reionization.

  13. A Large Scale Survey of Molecular Clouds at Nagoya University

    NASA Astrophysics Data System (ADS)

    Mizuno, A.; Onishi, T.; Yamaguchi, N.; Hara, A.; Hayakawa, T.; Kato, S.; Mizuno, N.; Abe, R.; Saito, H.; Yamaguchi, R.; Mine, Y.; Moriguchi, Y.; Mano, S.; Matsunaga, K.; Tachihara, K.; Kawamura, A.; Yonekura, Y.; Ogawa, H.; Fukui, Y.

    1999-10-01

    Large-scale 12CO and 13CO (J=1-0) surveys have been carried out with two 4-m radio telescopes at Nagoya University since 1990 in order to obtain a complete sample of the Galactic molecular clouds. The southern survey started in 1996 with one of the telescopes, named "NANTEN", installed at the Las Campanas Observatory in Chile. The observations, made at a grid spacing of 2' - 8' with a 2.7' beam, allow us to identify and resolve the individual star-forming dense cores within 1-2 kpc of the Sun. The present coverages in 12CO and 13CO are ~ 7% and ~ 21% of the sky, respectively. The data are used to derive physical parameters of dense cores and to study the mass spectrum, morphology, and conditions for star formation. For example, the survey revealed that the cloud mass function is fairly universal for various regions (e.g., Yonekura et al. 1998, ApJS, 110, 21), and that star-forming clouds tend to be characterized by low Mvir/MLTE (e.g., Kawamura et al. 1998, ApJS, 117, 387; Mizuno et al. 1999, PASJ, in press). The survey will provide an invaluable database of southern star- and planet-forming regions, one of the important scientific targets of ALMA.

  14. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. PMID:27568589

  15. Soft-Pion theorems for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2014-09-01

    Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias & Riotto and Peloso & Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.
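    For orientation, the Newtonian consistency relation referred to above can be written schematically as follows (this particular form is quoted from memory of the standard literature rather than taken from the paper, so conventions and prefactors should be checked there):

        \lim_{q \to 0} \langle \delta_{\mathbf{q}}(\eta)\,\delta_{\mathbf{k}_1}(\eta_1)\cdots\delta_{\mathbf{k}_N}(\eta_N) \rangle'
            \simeq - P_\delta(q,\eta) \sum_{a=1}^{N} \frac{D(\eta_a)}{D(\eta)}\,\frac{\mathbf{k}_a \cdot \mathbf{q}}{q^2}\,
            \langle \delta_{\mathbf{k}_1}(\eta_1)\cdots\delta_{\mathbf{k}_N}(\eta_N) \rangle' ,

    where D is the linear growth factor and the prime denotes dropping the momentum-conserving delta function. At equal times the 1/q enhancement cancels after summing over a, which is why equal-time statements require assumptions beyond the symmetry argument alone.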

  16. Local Large-Scale Structure and the Assumption of Homogeneity

    NASA Astrophysics Data System (ADS)

    Keenan, Ryan C.; Barger, Amy J.; Cowie, Lennox L.

    2016-10-01

    Our recent estimates of galaxy counts and the luminosity density in the near-infrared (Keenan et al. 2010, 2012) indicated that the local universe may be under-dense on radial scales of several hundred megaparsecs. Such a large-scale local under-density could introduce significant biases in the measurement and interpretation of cosmological observables, such as the inferred effects of dark energy on the rate of expansion. In Keenan et al. (2013), we measured the K-band luminosity density as a function of distance from us to test for such a local under-density. We made this measurement over the redshift range 0.01 < z < 0.2 (radial distances D ~ 50 - 800 h_70^{-1} Mpc). We found that the shape of the K-band luminosity function is relatively constant as a function of distance and environment. We derive a local (z < 0.07, D < 300 h_70^{-1} Mpc) K-band luminosity density that agrees well with previously published studies. At z > 0.07, we measure an increasing luminosity density that by z ~ 0.1 rises to ~1.5 times the locally measured value. This implies that the stellar mass density follows a similar trend. Assuming that the underlying dark matter distribution is traced by this luminous matter, this suggests that the local mass density may be lower than the global mass density of the universe at an amplitude and on a scale sufficient to introduce significant biases into the measurement of basic cosmological observables. At least one study has shown that an under-density of roughly this amplitude and scale could resolve the apparent tension between direct local measurements of the Hubble constant and those inferred by the Planck team. Other theoretical studies have concluded that such an under-density could account for what looks like an accelerating expansion, even when no dark energy is present.

  17. Soft-Pion theorems for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lhui@astro.columbia.edu

    2014-09-01

    Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias and Riotto and Peloso and Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.

  18. Large-scale first principles configuration interaction calculations of optical absorption in aluminum clusters.

    PubMed

    Shinde, Ravindra; Shukla, Alok

    2014-10-14

    We report the linear optical absorption spectra of aluminum clusters Al_n (n = 2-5) involving valence transitions, computed using a large-scale all-electron configuration interaction (CI) methodology. Several low-lying isomers of each cluster were considered, and their geometries were optimized at the coupled-cluster singles-doubles (CCSD) level of theory. With these optimized ground-state geometries, excited states of the different clusters were computed using the multi-reference singles-doubles configuration-interaction (MRSDCI) approach, which includes electron correlation effects at a sophisticated level. These CI wave functions were used to compute the transition dipole matrix elements connecting the ground and various excited states of each cluster, and thus their photoabsorption spectra. The convergence of our results with respect to the basis sets and the size of the CI expansion was carefully examined. Our results were found to differ significantly from those obtained using time-dependent density functional theory (TDDFT) [Deshpande et al., Phys. Rev. B: Condens. Matter Mater. Phys., 2003, 68, 035428]. When compared to the available experimental data for the isomers of Al_2 and Al_3, our results are in very good agreement as far as important peak positions are concerned. The contribution of configurations to the many-body wave functions of the various excited states suggests that in most cases the optical excitations involved are collective, and plasmonic in nature. PMID:25162600

  19. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model the correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
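    To illustrate the kind of matching-verification step described above, here is a generic Hamming-embedding check against an inverted file (a minimal sketch, not the paper's exact indexing scheme; the 64-bit signature length and the threshold are arbitrary illustrative choices):

        def hamming_distance(sig_a: int, sig_b: int) -> int:
            """Number of differing bits between two binary feature signatures."""
            return bin(sig_a ^ sig_b).count("1")

        def verified_matches(query_features, index, max_hamming=16):
            """Match query keypoints against an inverted file.

            query_features: list of (visual_word_id, 64-bit signature) tuples.
            index: dict mapping visual_word_id -> list of (image_id, signature).
            A pair is accepted only if the visual words agree AND the binary
            signatures are within the Hamming threshold, which removes many of
            the false positives left by quantization alone.
            """
            votes = {}
            for word, sig in query_features:
                for image_id, db_sig in index.get(word, []):
                    if hamming_distance(sig, db_sig) <= max_hamming:
                        votes[image_id] = votes.get(image_id, 0) + 1
            # rank candidate images by number of verified matches
            return sorted(votes.items(), key=lambda kv: -kv[1])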

  20. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine datasets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.
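    A toy version of the idea, compress with a wavelet transform, keep only the largest coefficients, and answer queries from the compressed model, might look as follows (a plain Haar transform on a 1-D signal of power-of-two length; the actual system works on multivariate simulation fields and is far more elaborate):

        import numpy as np

        def haar_forward(x):
            """Full Haar decomposition of a signal whose length is a power of two."""
            x = x.astype(float).copy()
            coeffs = []
            while len(x) > 1:
                avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
                det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
                coeffs.append(det)
                x = avg
            coeffs.append(x)  # coarsest approximation coefficient
            return coeffs

        def haar_inverse(coeffs):
            x = coeffs[-1].copy()
            for det in reversed(coeffs[:-1]):
                up = np.empty(2 * len(det))
                up[0::2] = (x + det) / np.sqrt(2.0)
                up[1::2] = (x - det) / np.sqrt(2.0)
                x = up
            return x

        def compress(coeffs, keep_fraction=0.05):
            """Zero all but the largest-magnitude detail coefficients; keep the
            coarsest approximation so the global trend survives."""
            details = np.concatenate(coeffs[:-1])
            threshold = np.quantile(np.abs(details), 1.0 - keep_fraction)
            kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs[:-1]]
            return kept + [coeffs[-1]]

        # Ad hoc query example: approximate mean over an index range.
        signal = np.sin(np.linspace(0, 20, 1024)) + 0.1 * np.random.randn(1024)
        approx = haar_inverse(compress(haar_forward(signal)))
        print(signal[100:200].mean(), approx[100:200].mean())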

  2. A vector reconstruction based clustering algorithm particularly for large-scale text collection.

    PubMed

    Liu, Ming; Wu, Chong; Chen, Lei

    2015-03-01

    Along with the rapid evolution of internet technology, internet users face large amounts of textual data every day. Organizing texts into categories can help users extract useful information from large-scale text collections. Clustering is one of the most promising tools for categorizing texts due to its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their effectiveness on large-scale text collections, mainly because of the high-dimensional vector space and the semantic similarity among texts. To cluster large-scale text collections effectively and efficiently, this paper puts forward a vector-reconstruction-based clustering algorithm. Only the features that can represent a cluster are preserved in the cluster's representative vector. The algorithm alternates between two sub-processes until it converges. One is the partial-tuning sub-process, in which feature weights are fine-tuned by an iterative procedure similar to the self-organizing map (SOM) algorithm; to accelerate clustering, an intersection-based similarity measure and a corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall-tuning sub-process, in which features are reallocated among clusters and features that no longer help represent a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that the algorithm achieves high-quality performance on both small-scale and large-scale text collections.
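    A rough sketch of the intersection-style similarity and the SOM-like weight update described above (function names, the learning rate, and the pruning threshold are illustrative inventions, not the authors' exact formulation):

        def intersection_similarity(doc_vec, cluster_vec):
            """Similarity counted only over features shared by the document and the
            cluster's representative vector (both are dicts feature -> weight)."""
            shared = doc_vec.keys() & cluster_vec.keys()
            return sum(min(doc_vec[f], cluster_vec[f]) for f in shared)

        def partial_tuning_step(cluster_vec, doc_vec, learning_rate=0.1):
            """SOM-style fine-tuning: pull the representative vector's weights on
            shared features toward the assigned document's weights."""
            for f in doc_vec.keys() & cluster_vec.keys():
                cluster_vec[f] += learning_rate * (doc_vec[f] - cluster_vec[f])
            return cluster_vec

        def overall_tuning_step(cluster_vec, min_weight=1e-3):
            """Overall tuning: drop features whose weight no longer helps represent
            the cluster."""
            return {f: w for f, w in cluster_vec.items() if w >= min_weight}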

  3. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
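    For concreteness, a generic discrete multiplicative cascade on a 2-D grid looks roughly like the following (this uses a simple log-normal generator normalized to unit mean as a stand-in; the paper's actual generator and its dependence on large-scale forcing are more specific):

        import numpy as np

        def random_cascade(levels=6, sigma=0.4, mean_rain=1.0, rng=None):
            """Build a 2^levels x 2^levels rain field by repeated subdivision.

            At each level every cell is split into 4 children and each child is
            multiplied by an independent weight W with E[W] = 1, so the
            large-scale average is preserved on average while small-scale
            variability builds up multiplicatively.
            """
            rng = np.random.default_rng() if rng is None else rng
            field = np.full((1, 1), mean_rain)
            for _ in range(levels):
                field = np.kron(field, np.ones((2, 2)))          # subdivide cells
                weights = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                                        size=field.shape)        # E[W] = 1
                field = field * weights
            return field

        field = random_cascade()
        print(field.shape, field.mean())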

  4. Functional correlates of compensatory renal hypertrophy

    PubMed Central

    Hayslett, John P.; Kashgarian, Michael; Epstein, Franklin H.

    1968-01-01

    The functional correlates of compensatory renal hypertrophy were studied by micropuncture techniques in rats after the removal of one kidney. The glomerular filtration rate increased to roughly the same extent in the whole kidney and in individual surface nephrons, resulting in a greater amount of sodium delivered to the tubules for reabsorption. The fraction of the glomerular filtrate absorbed [determined from the tubular fluid-to-plasma ratio (TF/P) for inulin] remained unchanged in both proximal and distal portions of the nephron. The way in which the tubules adjusted to nephrectomy, however, differed in proximal and distal convolutions. After nephrectomy, the reabsorptive half-time, indicated by the rate of shrinkage of a droplet of saline in a tubule blocked with oil, was unchanged in the proximal tubule but significantly shortened in the distal convoluted tubule. Nevertheless, steady-state concentrations of sodium in an isolated raffinose droplet in the distal as well as the proximal tubule were the same in hypertrophied kidneys as in control animals. Possible reasons for this paradox are discussed. Transit time through the proximal tubules was unchanged by compensatory hypertrophy, but transit time to the distal tubules was prolonged. Changes in renal structure resulting from compensatory hypertrophy were also found to differ in the proximal and the distal portions of the nephron. Although tubular volume increased in both portions, the volume increase was twice as great in the proximal tubule as in the distal. In order, therefore, for net reabsorption to increase in the distal tubule, where the changes in tubular volume are not so marked, an increase in reabsorptive capacity per unit length of tubule is required. This increase is reflected in the shortening of reabsorptive half-time in the oil-blocked distal tubule that was actually observed. PMID:5641618

  5. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
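    As background for the kind of nonlinear relation being referred to, the standard expression for the Ricci scalar of a conformally flat spatial metric (quoted here as a textbook result rather than from the paper) reads, for spatial slices with metric γ_ij = a²(t) e^{2ζ} δ_ij,

        R^{(3)} = -\frac{2}{a^2}\, e^{-2\zeta} \left[ 2\nabla^2\zeta + (\nabla\zeta)^2 \right]
                \approx -\frac{4}{a^2}\nabla^2\zeta \quad \text{(at linear order)},

    so an exactly Gaussian ζ still produces a non-Gaussian curvature, and hence a non-Gaussian contribution to the initial matter density.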

  6. Spatiotemporal dynamics of large-scale brain activity

    NASA Astrophysics Data System (ADS)

    Neuman, Jeremy

    Understanding the dynamics of large-scale brain activity is a tough challenge. One reason for this is the presence of an incredible amount of complexity arising from having roughly 100 billion neurons connected via 100 trillion synapses. Because of the extremely high number of degrees of freedom in the nervous system, the question of how the brain manages to properly function and remain stable, yet also be adaptable, must be posed. Neuroscientists have identified many ways the nervous system makes this possible, of which synaptic plasticity is possibly the most notable one. On the other hand, it is vital to understand how the nervous system also loses stability, resulting in neuropathological diseases such as epilepsy, a disease which affects 1% of the population. In the following work, we seek to answer some of these questions from two different perspectives. The first uses mean-field theory applied to neuronal populations, where the variables of interest are the percentages of active excitatory and inhibitory neurons in a network, to consider how the nervous system responds to external stimuli, self-organizes and generates epileptiform activity. The second method uses statistical field theory, in the framework of single neurons on a lattice, to study the concept of criticality, an idea borrowed from physics which posits that in some regime the brain operates in a collectively stable or marginally stable manner. This will be examined in two different neuronal networks with self-organized criticality serving as the overarching theme for the union of both perspectives. One of the biggest problems in neuroscience is the question of to what extent certain details are significant to the functioning of the brain. These details give rise to various spatiotemporal properties that at the smallest of scales explain the interaction of single neurons and synapses and at the largest of scales describe, for example, behaviors and sensations. In what follows, we will shed some
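    The first perspective described above treats the fractions of active excitatory and inhibitory neurons as mean-field variables. A minimal Wilson-Cowan-style sketch of that kind of population model (all coupling constants, the drive P, and the sigmoid are illustrative placeholders, not the dissertation's actual equations) is:

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def simulate(T=200.0, dt=0.01, P=1.25,
                     w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0):
            """Euler integration of excitatory (E) and inhibitory (I) activity,
            interpreted as fractions of active neurons in each population."""
            n = int(T / dt)
            E = np.zeros(n)
            I = np.zeros(n)
            for t in range(n - 1):
                dE = -E[t] + (1.0 - E[t]) * sigmoid(w_ee * E[t] - w_ei * I[t] + P)
                dI = -I[t] + (1.0 - I[t]) * sigmoid(w_ie * E[t] - w_ii * I[t])
                E[t + 1] = E[t] + dt * dE
                I[t + 1] = I[t] + dt * dI
            return E, I

        E, I = simulate()
        print(E[-1], I[-1])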

  7. Emergence of coherent structures and large-scale flows in motile suspensions.

    PubMed

    Saintillan, David; Shelley, Michael J

    2012-03-01

    The emergence of coherent structures, large-scale flows and correlated dynamics in suspensions of motile particles such as swimming micro-organisms or artificial microswimmers is studied using direct particle simulations. A detailed model is proposed for a slender rod-like particle that propels itself in a viscous fluid by exerting a prescribed tangential stress on its surface, and a method is devised for the efficient calculation of hydrodynamic interactions in large-scale suspensions of such particles using slender-body theory and a smooth particle-mesh Ewald algorithm. Simulations are performed with periodic boundary conditions for various system sizes and suspension volume fractions, and demonstrate a transition to large-scale correlated motions in suspensions of rear-actuated swimmers, or Pushers, above a critical volume fraction or system size. This transition, which is not observed in suspensions of head-actuated swimmers, or Pullers, is seen most clearly in particle velocity and passive tracer statistics. These observations are consistent with predictions from our previous mean-field kinetic theory, one of which states that instabilities will arise in uniform isotropic suspensions of Pushers when the product of the linear system size with the suspension volume fraction exceeds a given threshold. We also find that the collective dynamics of Pushers result in giant number fluctuations, local alignment of swimmers and strongly mixing flows. Suspensions of Pullers, which evince no large-scale dynamics, nonetheless display interesting deviations from the random isotropic state.

  8. Unified Access Architecture for Large-Scale Scientific Datasets

    NASA Astrophysics Data System (ADS)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists now deal with much larger data volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of the other big-data tools that accompany these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow, owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from other data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big-data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big-data tools. Such invocation of foreign data manipulation routines inside a database query can be made possible through a user-defined function (UDF). UDFs that allow enough freedom to call modules from another language and to interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For the purpose of this research we attempt the coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes and output

  9. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  10. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  11. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. Quantities to which all drivers of convection contribute are the moist static energy and, correspondingly, the gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud-resolving models, and these will in turn be related to assumptions used to parameterize convection in large-scale models.
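    For reference, the moist static energy referred to above is usually written as (standard definition, not specific to this abstract)

        h = c_p T + g z + L_v q,

    with temperature T, height z and specific humidity q; budgets of h (and of its column integral) are what the variance analysis mentioned above operates on, since radiation, surface fluxes and tropospheric humidity all enter as sources or sinks of h.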

  12. Human pescadillo induces large-scale chromatin unfolding.

    PubMed

    Zhang, Hao; Fang, Yan; Huang, Cuifen; Yang, Xiao; Ye, Qinong

    2005-06-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac-operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  13. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  14. The large scale microelectronics Computer-Aided Design and Test (CADAT) system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.

    1978-01-01

    The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.

  15. THE DETECTION OF THE LARGE-SCALE ALIGNMENT OF MASSIVE GALAXIES AT z ≈ 0.6

    SciTech Connect

    Li Cheng; Jing, Y. P.; Faltenbacher, A.; Wang Jie

    2013-06-10

    We report on the detection of the alignment between galaxies and large-scale structure at z ≈ 0.6, based on the CMASS galaxy sample from the Baryon Oscillation Spectroscopic Survey Data Release 9. We use two statistics to quantify the alignment signal: (1) the alignment two-point correlation function, which probes the dependence of galaxy clustering at a given separation in redshift space on the projected angle (θ_p) between the orientation of galaxies and the line connecting to other galaxies, and (2) the cos(2θ) statistic, which estimates the average of cos(2θ_p) for all correlated pairs at a given separation s. We find a significant alignment signal out to about 70 h^-1 Mpc in both statistics. Applications of the same statistics to dark matter halos of mass above 10^12 h^-1 M_⊙ in a large cosmological simulation show scale-dependent alignment signals similar to the observations, but with higher amplitudes at all scales probed. We show that this discrepancy may be partially explained by a misalignment angle between central galaxies and their host halos, though detailed modeling is needed in order to better understand the link between the orientations of galaxies and host halos. In addition, we find systematic trends of the alignment statistics with the stellar mass of the CMASS galaxies, in the sense that more massive galaxies are more strongly aligned with the large-scale structure.
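    Schematically, the second statistic quoted above is the pair-weighted average (our notation, matching the verbal description rather than the paper's equations)

        \langle \cos 2\theta \rangle (s) = \frac{1}{N_{\rm pair}(s)} \sum_{\text{pairs at separation } s} \cos 2\theta_p ,

    which vanishes for randomly oriented galaxies and is positive when galaxy major axes preferentially point toward their correlated neighbours.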

  16. Evolution of the galaxy correlation function at redshifts 0.2 < z < 3

    NASA Astrophysics Data System (ADS)

    Sołtan, Andrzej M.

    2016-10-01

    We determine the auto-correlation function (ACF) of galaxies using massive deep galaxy surveys for which distances to individual objects are assessed using photometric redshifts. The method is applied to the 2 deg^2 COSMOS survey of ~300,000 galaxies with i+ < 25 and z_ph ≲ 3. The distance estimates based on photometric redshifts are not sufficiently accurate to be used directly to determine the ACF. Nevertheless, the photometric redshifts carry statistical information on the data distribution on (very) large scales. The investigation of the surface distribution of galaxies in several redshift (= distance) bins allows us to determine the spatial (3D) ACF over the redshift range 0.2 - 3.2, or a lookback time of 2.4 - 11.5 Gyr.
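    Although the abstract does not spell out the estimator, correlation functions of this kind are typically measured with a pair-count estimator such as Landy-Szalay (given here for completeness, not taken from the paper):

        \hat\xi(r) = \frac{DD(r) - 2\,DR(r) + RR(r)}{RR(r)},

    where DD, DR and RR are normalized counts of galaxy-galaxy, galaxy-random and random-random pairs at separation r; the photometric-redshift bins determine which pairs are counted together.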

  17. Large covariance matrices: smooth models from the two-point correlation function

    NASA Astrophysics Data System (ADS)

    O'Connell, Ross; Eisenstein, Daniel; Vargas, Mariana; Ho, Shirley; Padmanabhan, Nikhil

    2016-11-01

    We introduce a new method for estimating the covariance matrix for the galaxy correlation function in surveys of large-scale structure. Our method combines simple theoretical results with a realistic characterization of the survey to dramatically reduce noise in the covariance matrix. For example, with an investment of only ≈1000 CPU hours we can produce a model covariance matrix with noise levels that would otherwise require ~35,000 mocks. Non-Gaussian contributions to the model are calibrated against mock catalogues, after which the model covariance is found to be in impressive agreement with the mock covariance matrix. Since calibration of this method requires fewer mocks than brute-force approaches, we believe that it could dramatically reduce the number of mocks required to analyse future surveys.
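    The brute-force alternative that the ~35,000-mock figure refers to is the standard sample covariance over mock catalogues, for example (a minimal sketch; xi_mocks stands for an array of correlation-function measurements, one row per mock):

        import numpy as np

        def sample_covariance(xi_mocks):
            """Unbiased sample covariance of the correlation function.

            xi_mocks: array of shape (n_mocks, n_bins), one xi(r) vector per mock.
            Returns the (n_bins, n_bins) matrix
            C_ij = sum_k (xi_ki - mean_i)(xi_kj - mean_j) / (n_mocks - 1).
            """
            xi_mocks = np.asarray(xi_mocks, dtype=float)
            resid = xi_mocks - xi_mocks.mean(axis=0)
            return resid.T @ resid / (xi_mocks.shape[0] - 1)

        # Noise in this estimate scales roughly as 1/sqrt(n_mocks), which is why
        # smooth model covariances calibrated on far fewer mocks are attractive.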

  18. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
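    For reference, the magnetic helicity discussed above is the volume integral (standard definition)

        H_M = \int_V \mathbf{A} \cdot \mathbf{B} \, d^3x, \qquad \mathbf{B} = \nabla \times \mathbf{A},

    which is gauge invariant when no field lines cross the boundary of V, and whose conservation relative to magnetic energy underlies properties (1) and (2) above.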

  19. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    NASA Astrophysics Data System (ADS)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    dates involving observations from multiple sites (rain gauges). The approach combines the POT (Peaks Over Threshold) method with 'declustering' of the data to approximate independence, based on the autocorrelation structure of each rainfall series. The cross-correlation among sites is also considered in developing the event criteria, yielding a rational choice of the extreme dates given the 'spotty' nature of the intense convection. Based on the identified dates, we are developing a supporting tool for forecasting extreme rainfall from the corresponding large-scale meteorological patterns (LSMPs). The LSMP methodology focuses on the larger-scale patterns that models are better able to forecast, as those larger-scale patterns create the conditions fostering the local extreme weather events (EWEs). A bootstrap resampling method is applied to highlight the key features that are statistically significantly associated with the extreme events. Grotjahn, R., and G. Faure. 2008: Composite Predictor Maps of Extraordinary Weather Events in the Sacramento California Region. Weather and Forecasting, 23: 313-335.
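    A minimal example of the peaks-over-threshold step mentioned above, fitting a Generalized Pareto distribution to declustered threshold exceedances with SciPy (the threshold choice and the crude gap-based declustering are only illustrative; the study's criteria are site-specific):

        import numpy as np
        from scipy import stats

        def fit_pot(rainfall, threshold, min_gap=3):
            """Fit a Generalized Pareto distribution to threshold exceedances.

            rainfall: 1-D array of daily totals for one site.
            min_gap:  required number of days between retained exceedances, a
                      crude declustering step to approximate independence.
            """
            exceed_days = np.where(rainfall > threshold)[0]
            kept, last = [], -np.inf
            for day in exceed_days:
                if day - last >= min_gap:
                    kept.append(day)
                    last = day
            excesses = rainfall[kept] - threshold
            shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
            return shape, scale, np.array(kept)

        rng = np.random.default_rng(0)
        series = rng.gamma(shape=0.5, scale=8.0, size=5000)   # synthetic "rainfall"
        print(fit_pot(series, threshold=np.quantile(series, 0.95)))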

  20. Large Scale Structure Studies: Final Results from a Rich Cluster Redshift Survey

    NASA Astrophysics Data System (ADS)

    Slinglend, K.; Batuski, D.; Haase, S.; Hill, J.

    1995-12-01

    The results from the COBE satellite show the existence of structure on scales of the order of 10% or more of the horizon scale of the universe. Rich clusters of galaxies from the Abell-ACO catalogs show evidence of structure on scales of 100 Mpc and hold the promise of confirming structure on the scale of the COBE result. Unfortunately, until now, redshift information has been unavailable for a large percentage of these clusters, so present knowledge of their three-dimensional distribution has quite large uncertainties. Our approach in this effort has been to use the MX multifiber spectrometer on the Steward 2.3m to measure redshifts of at least ten galaxies in each of 88 Abell cluster fields with richness class R >= 1 and m_10 <= 16.8 (estimated z <= 0.12) and zero or one measured redshifts. This work has resulted in a deeper, 95% complete, and more reliable sample of 3-D positions of rich clusters. The primary intent of this survey has been to constrain theoretical models for the formation of the structure we see in the universe today through 2-pt. spatial correlation function and other analyses of the large scale structures traced by these clusters. In addition, we have obtained enough redshifts per cluster to greatly improve the quality and size of the sample of reliable cluster velocity dispersions available for use in other studies of cluster properties. These new data have also allowed the construction of an updated and more reliable supercluster candidate catalog. Our efforts have effectively doubled the volume traced by these clusters. Presented here are the resulting 2-pt. spatial correlation function, as well as density plots and several other figures quantifying the large scale structure from this much deeper and more complete sample. Also, with 10 or more redshifts in most of our cluster fields, we have investigated the extent of projection effects within the Abell catalog in an effort to quantify and understand how this may affect the Abell sample.
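    For readers unfamiliar with the machinery, the 2-pt. spatial correlation function is commonly estimated by pair counting against a random comparison catalogue, e.g. the Landy-Szalay estimator quoted a few records above; a runnable toy sketch (a uniform box rather than the survey's actual geometry, selection or weighting) is:

        import numpy as np
        from scipy.spatial import cKDTree

        def landy_szalay(data, randoms, bins):
            """Two-point correlation function xi(r) via the Landy-Szalay estimator.

            data, randoms: (N, 3) arrays of comoving positions (e.g. in Mpc).
            bins: 1-D array of radial bin edges.
            """
            nd, nr = len(data), len(randoms)
            t_d, t_r = cKDTree(data), cKDTree(randoms)
            # cumulative pair counts at each bin edge, then differenced per bin
            dd = np.diff(t_d.count_neighbors(t_d, bins)) / (nd * (nd - 1))
            rr = np.diff(t_r.count_neighbors(t_r, bins)) / (nr * (nr - 1))
            dr = np.diff(t_d.count_neighbors(t_r, bins)) / (nd * nr)
            return (dd - 2.0 * dr + rr) / rr

        rng = np.random.default_rng(1)
        box = 400.0                                     # toy box size in Mpc
        clusters = rng.uniform(0, box, size=(2000, 3))
        randoms = rng.uniform(0, box, size=(20000, 3))
        edges = np.linspace(5.0, 100.0, 20)
        print(landy_szalay(clusters, randoms, edges))   # ~0 for an unclustered field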

  1. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  2. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  3. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock,John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  4. Large-scale superfluid vortex rings at nonzero temperatures

    NASA Astrophysics Data System (ADS)

    Wacks, D. H.; Baggaley, A. W.; Barenghi, C. F.

    2014-12-01

    We numerically model experiments in which large-scale vortex rings—bundles of quantized vortex loops—are created in superfluid helium by a piston-cylinder arrangement. We show that the presence of a normal-fluid vortex ring together with the quantized vortices is essential to explain the coherence of these large-scale vortex structures at nonzero temperatures, as observed experimentally. Finally we argue that the interaction of superfluid and normal-fluid vortex bundles is relevant to recent investigations of superfluid turbulence.

  5. Calibration of hydraulic parameters for large-scale vertical flow constructed wetlands

    NASA Astrophysics Data System (ADS)

    Maier, Uli; DeBiase, Cecilia; Baeder-Bederski, Oliver; Bayer, Peter

    2009-05-01

    Constructed wetlands for water cleanup have been in use for several years and are promising for cost-efficient remediation of large-scale contamination. Within this study, flow conditions in layered vertical soil filters used for remediation of contaminated groundwater were investigated in detail by dedicated discharge experiments and a complementary modeling study. Unsaturated water flow was measured in two vertical-flow constructed wetlands for contaminated groundwater treatment at a site in eastern Germany. Numerical simulations were performed using the code MIN3P, in which variably saturated flow is based on the Richards equation. Soil hydraulic functions based on van Genuchten coefficients and preferential flow characteristics were obtained by calibrating the model to measured data using self-adaptive evolution strategies with covariance matrix adaptation (CMA-ES). The presented inverse modeling procedure not only provides best-fit parameterizations for separate and joint model objectives, but also utilizes the information from multiple restarts of the optimization algorithm to determine suitable parameter ranges and reveal potential correlations. The sequential automatic calibration is both straightforward and efficient even if different complex objective functions are considered.
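    The van Genuchten parameterization mentioned above ties the water-retention curve to a small set of fitted coefficients; in its usual form (standard soil-physics notation, independent of this particular study)

        S_e(h) = \frac{\theta(h) - \theta_r}{\theta_s - \theta_r} = \left[ 1 + (\alpha |h|)^n \right]^{-m}, \qquad m = 1 - \frac{1}{n},

    where h is the pressure head, θ_r and θ_s the residual and saturated water contents, and α and n the shape parameters that the CMA-ES calibration adjusts (together with any preferential-flow parameters) to match the measured discharge.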

  6. Large Scale Electronic Structure Calculations using Quantum Chemistry Methods

    NASA Astrophysics Data System (ADS)

    Scuseria, Gustavo E.

    1998-03-01

    This talk will address our recent efforts in developing fast, linear-scaling electronic structure methods for large-scale applications. Of special importance is our fast multipole method (FMM) (M. C. Strain, G. E. Scuseria, and M. J. Frisch, Science 271, 51 (1996)) for achieving linear scaling for the quantum Coulomb problem (GvFMM), the traditional bottleneck in quantum chemistry calculations based on Gaussian orbitals. Fast quadratures (R. E. Stratmann, G. E. Scuseria, and M. J. Frisch, Chem. Phys. Lett. 257, 213 (1996)), combined with methods that avoid the Hamiltonian diagonalization (J. M. Millam and G. E. Scuseria, J. Chem. Phys. 106, 5569 (1997)), have resulted in density functional theory (DFT) programs that can be applied to systems containing many hundreds of atoms and, depending on computational resources or level of theory, to many thousands of atoms (A. D. Daniels, J. M. Millam and G. E. Scuseria, J. Chem. Phys. 107, 425 (1997)). Three solutions for the diagonalization bottleneck will be analyzed and compared: a conjugate gradient density matrix search (CGDMS), a Hamiltonian polynomial expansion of the density matrix, and a pseudo-diagonalization method. Besides DFT, our near-field exchange method (J. C. Burant, G. E. Scuseria, and M. J. Frisch, J. Chem. Phys. 105, 8969 (1996)) for linear-scaling Hartree-Fock calculations will be discussed. Based on these improved capabilities, we have also developed programs to obtain vibrational frequencies (via analytic energy second derivatives) and excitation energies (through time-dependent DFT) of large molecules like porphin or C_70. Our GvFMM has been extended to periodic systems (K. N. Kudin and G. E. Scuseria, Chem. Phys. Lett., in press) and progress towards a Gaussian-based DFT and HF program for polymers and solids will be reported. Last, we will discuss our progress on a Laplace-transformed O(N^2) second-order perturbation theory (MP2) method.

  7. Ecosystem resilience despite large-scale altered hydroclimatic conditions.

    PubMed

    Ponce Campos, Guillermo E; Moran, M Susan; Huete, Alfredo; Zhang, Yongguang; Bresloff, Cynthia; Huxman, Travis E; Eamus, Derek; Bosch, David D; Buda, Anthony R; Gunter, Stacey A; Scalley, Tamara Heartsill; Kitchen, Stanley G; McClaran, Mitchel P; McNab, W Henry; Montoya, Diane S; Morgan, Jack A; Peters, Debra P C; Sadler, E John; Seyfried, Mark S; Starks, Patrick J

    2013-02-21

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological model for many regions. Large-scale, warm droughts have recently occurred in North America, Africa, Europe, Amazonia and Australia, resulting in major effects on terrestrial ecosystems, carbon balance and food security. Here we compare the functional response of above-ground net primary production to contrasting hydroclimatic periods in the late twentieth century (1975-1998), and drier, warmer conditions in the early twenty-first century (2000-2009) in the Northern and Southern Hemispheres. We find a common ecosystem water-use efficiency (WUE(e): above-ground net primary production/evapotranspiration) across biomes ranging from grassland to forest that indicates an intrinsic system sensitivity to water availability across rainfall regimes, regardless of hydroclimatic conditions. We found higher WUE(e) in drier years that increased significantly with drought to a maximum WUE(e) across all biomes; and a minimum native state in wetter years that was common across hydroclimatic periods. This indicates biome-scale resilience to the interannual variability associated with the early twenty-first century drought--that is, the capacity to tolerate low, annual precipitation and to respond to subsequent periods of favourable water balance. These findings provide a conceptual model of ecosystem properties at the decadal scale applicable to the widespread altered hydroclimatic conditions that are predicted for later this century. Understanding the hydroclimatic threshold that will break down ecosystem resilience and alter maximum WUE(e) may allow us to predict land-surface consequences as large regions become more arid, starting with water-limited, low-productivity grasslands.

  8. A cumulant functional for static and dynamic correlation

    NASA Astrophysics Data System (ADS)

    Hollett, Joshua W.; Hosseini, Hessam; Menzies, Cameron

    2016-08-01

    A functional for the cumulant energy is introduced. The functional is composed of a pair-correction and static and dynamic correlation energy components. The pair-correction and static correlation energies are functionals of the natural orbitals and the occupancy transferred between near-degenerate orbital pairs, rather than the orbital occupancies themselves. The dynamic correlation energy is a functional of the statically correlated on-top two-electron density. The on-top density functional used in this study is the well-known Colle-Salvetti functional. Using the cc-pVTZ basis set, the functional effectively models the bond dissociation of H2, LiH, and N2 with equilibrium bond lengths and dissociation energies comparable to those provided by multireference second-order perturbation theory. The performance of the cumulant functional is less impressive for HF and F2, mainly due to an underestimation of the dynamic correlation energy by the Colle-Salvetti functional.

  9. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  10. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  11. Potential and issues in large scale flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun

    2015-04-01

    The last years have seen a growing research interest in large scale flood inundation modelling. Nowadays, modelling tools and datasets allow for analyzing flooding processes at regional, continental and even global scales with an increasing level of detail. As a result, several research works have already addressed this topic using different methodologies of varying complexity. The potential of these studies is certainly enormous. Large scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. It can provide a consistent framework for a comprehensive assessment of flooding processes in the river basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of this potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies and, on the other hand, to discuss the possibilities and implications of using large scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large scale flood inundation studies be used by stakeholders? And how should we use this information where previous higher-resolution studies exist, or where official studies are available?

  12. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  13. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  14. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  15. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  16. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  17. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney-Hasegawa-Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 |k_x|} and M := {k : |k_y| < √3 |k_x|}, respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft’s argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.

  18. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and has the following advantages: a large observation range, variable view angles, long-term continuous observation and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, solid-Earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  19. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney-Hasegawa-Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 |k_x|} and M := {k : |k_y| < √3 |k_x|}, respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft’s argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.

  20. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  1. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  2. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  3. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  4. Ecosystem resilience despite large-scale altered hydroclimatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  5. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable. PMID:25162863

  6. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  7. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  8. Simulation and Analysis of Large-Scale Compton Imaging Detectors

    SciTech Connect

    Manini, H A; Lange, D J; Wright, D M

    2006-12-27

    We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.

  9. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered format (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusions and shadows. Thus, providing the technical basis (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the forthcoming national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing, (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) algorithm development for the generation of DTM-based and DBM-based orthophotos, (5) true orthophoto generation by merging the DBM-based and DTM-based orthophotos, and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  10. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  11. CACHE Guidelines for Large-Scale Computer Programs.

    ERIC Educational Resources Information Center

    National Academy of Engineering, Washington, DC. Commission on Education.

    The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…

  12. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.

  13. The Role of Plausible Values in Large-Scale Surveys

    ERIC Educational Resources Information Center

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…

  14. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  15. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  16. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  17. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  18. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  19. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  20. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
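
    To convey the flavor of this approach, the sketch below runs a value-iteration-style approximate dynamic program for a single hypothetical reservoir, with an interpolated value function over a storage grid, sampled stochastic inflows, and a made-up reward trading hydropower revenue against a flood penalty. It is a deliberately simplified stand-in, not the authors' policy-iteration scheme for the 13-reservoir Dalälven system.

        import numpy as np

        rng = np.random.default_rng(1)
        S_MAX, FLOOD_LEVEL, GAMMA = 100.0, 85.0, 0.95
        storages = np.linspace(0.0, S_MAX, 41)     # discretized storage grid
        releases = np.linspace(0.0, 20.0, 21)      # admissible releases per stage

        def reward(storage, release):
            # Hypothetical objective: hydropower revenue minus a flood penalty.
            return 1.0 * release - 5.0 * max(0.0, storage - FLOOD_LEVEL)

        def step(storage, release, inflow):
            return float(np.clip(storage - release + inflow, 0.0, S_MAX))

        V = np.zeros_like(storages)                # value-function approximation
        for sweep in range(200):                   # off-line learning sweeps
            inflows = rng.gamma(2.0, 4.0, 50)      # sampled stochastic inflow scenarios
            V_new = np.empty_like(V)
            for i, s in enumerate(storages):
                q = [reward(s, u) + GAMMA * np.mean(
                        np.interp([step(s, u, w) for w in inflows], storages, V))
                     for u in releases]
                V_new[i] = max(q)
            V = V_new

        def greedy_release(s, n_scenarios=200):
            # Operating policy implied by the learned value function.
            inflows = rng.gamma(2.0, 4.0, n_scenarios)
            q = [reward(s, u) + GAMMA * np.mean(
                    np.interp([step(s, u, w) for w in inflows], storages, V))
                 for u in releases]
            return releases[int(np.argmax(q))]

        print([greedy_release(s) for s in storages[::10]])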

  1. Toward Large-Scale Continuous EDA: A Random Matrix Theory Perspective.

    PubMed

    Kabán, A; Bootkrajang, J; Durrant, R J

    2016-01-01

    Estimation of distribution algorithms (EDAs) are a major branch of evolutionary algorithms (EAs) with some unique advantages in principle. They are able to take advantage of correlation structure to drive the search more efficiently, and they are able to provide insights about the structure of the search space. However, model building in high dimensions is extremely challenging, and as a result existing EDAs may become less attractive in large-scale problems because of the associated large computational requirements. Large-scale continuous global optimisation is key to many modern-day real-world problems. Scaling up EAs to large-scale problems has become one of the biggest challenges of the field. This paper pins down some fundamental roots of the problem and makes a start at developing a new and generic framework to yield effective and efficient EDA-type algorithms for large-scale continuous global optimisation problems. Our concept is to introduce an ensemble of random projections of the set of fittest search points to low dimensions as a basis for developing a new and generic divide-and-conquer methodology. Our ideas are rooted in the theory of random projections developed in theoretical computer science, and in developing and analysing our framework we exploit some recent results in nonasymptotic random matrix theory.
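
    The sketch below gives a schematic of the random-projection ensemble idea in an EDA-style loop: the fittest points are projected into several random low-dimensional subspaces, a Gaussian is fitted and sampled in each, and the samples are mapped back to the full space. The objective, dimensions, projection scaling and sampling scheme are simplifying assumptions of ours, not the algorithm analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):                         # toy objective: minimise a shifted sphere
            return np.sum((x - 1.0) ** 2, axis=-1)

        d, pop, k, n_proj, elite = 100, 200, 5, 10, 50
        X = rng.normal(0.0, 3.0, size=(pop, d))

        for generation in range(200):
            idx = np.argsort(sphere(X))[:elite]          # fittest search points
            E = X[idx]
            mu = E.mean(axis=0)
            offspring = []
            for _ in range(n_proj):                      # ensemble of random projections
                R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
                Y = (E - mu) @ R.T                       # elite deviations in a k-dim subspace
                C = np.cov(Y, rowvar=False) + 1e-8 * np.eye(k)
                Z = rng.multivariate_normal(np.zeros(k), C, size=pop // n_proj)
                # Back-project; sqrt(k/d) is a heuristic factor keeping deviations on scale.
                offspring.append(mu + (Z @ R) * np.sqrt(k / d))
            X = np.vstack(offspring)

        print("best objective value found:", float(sphere(X).min()))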

  2. Correlation functions of an autonomous stochastic system with time delays

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Mei, Dong Cheng

    2014-03-01

    The auto-correlation function and the cross-correlation function of an autonomous stochastic system with time delays are investigated. We obtain the distribution curves of the auto-correlation functions C_x(s) and C_y(s) and of the cross-correlation functions C_xy(s) and C_yx(s) of the stochastic dynamic variables by the stochastic simulation method. The delay time markedly changes the behavior of the dynamical variables of an autonomous stochastic system: it makes the auto-correlations and cross-correlations of the system oscillate periodically between positive and negative values, decay gradually, and finally tend to zero as the decay time increases. Both the delay time and the noise strength have an important impact on the auto-correlation and the cross-correlation of the autonomous stochastic delay system: the delay time enhances them, whereas the noise strength lowers them. Under time delay, we further show, by comparison, the differences between the auto-correlations and cross-correlations of the dynamical variables x and y.

  3. Cosmological implications of the CMB large-scale structure

    SciTech Connect

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ∼0.1% and the low-l multipole alignment would be unlikely at the ∼4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ∼0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be ∼7–10%, depending on the number of large-scale Sachs–Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ⩾7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a

  4. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
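
    To illustrate the kind of radial basis function approximation used for the unknown interaction terms, the sketch below fits a Gaussian RBF network to a scalar nonlinear function by batch linear least squares. The centers, widths and target function are arbitrary choices of ours, and the online adaptive weight-update laws and disturbance observers of the actual control scheme are not reproduced.

        import numpy as np

        def rbf_features(x, centers, width):
            """Gaussian radial basis functions evaluated at scalar inputs x."""
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

        # Unknown nonlinear interaction term to be approximated (toy stand-in).
        f = lambda x: 0.5 * np.sin(2.0 * x) + 0.1 * x ** 2

        x_train = np.linspace(-3.0, 3.0, 200)
        centers = np.linspace(-3.0, 3.0, 15)       # RBF centers spread over the domain
        Phi = rbf_features(x_train, centers, width=0.5)

        # Ideal (batch) weights; the control scheme would instead adapt these online.
        w, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)

        x_test = np.linspace(-3.0, 3.0, 50)
        approx = rbf_features(x_test, centers, width=0.5) @ w
        print(np.max(np.abs(approx - f(x_test))))   # worst-case approximation error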

  5. Modelling the angular correlation function and its full covariance in photometric galaxy surveys

    NASA Astrophysics Data System (ADS)

    Crocce, Martín; Cabré, Anna; Gaztañaga, Enrique

    2011-06-01

    Near-future cosmology will see the advent of wide-area photometric galaxy surveys, such as the Dark Energy Survey (DES), that extend to high redshifts (z˜ 1-2) but give poor radial distance resolution. In such cases splitting the data into redshift bins and using the angular correlation function w(θ), or the Cℓ power spectrum, will become the standard approach to extracting cosmological information or to studying the nature of dark energy through the baryon acoustic oscillations (BAO) probe. In this work we present a detailed model for w(θ) at large scales as a function of redshift and binwidth, including all relevant effects, namely non-linear gravitational clustering, bias, redshift space distortions and photo-z uncertainties. We also present a model for the full covariance matrix, characterizing the angular correlation measurements, that takes into account the same effects as for w(θ) and also the possibility of a shot-noise component and partial sky coverage. Provided with a large-volume N-body simulation from the MICE collaboration, we built several ensembles of mock redshift bins with a sky coverage and depth typical of forthcoming photometric surveys. The model for the angular correlation and the one for the covariance matrix agree remarkably well with the mock measurements in all configurations. The prospects for a full shape analysis of w(θ) at BAO scales in forthcoming photometric surveys such as DES are thus very encouraging.
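
    For orientation, the standard harmonic-space relation that a model of this kind evaluates for each redshift bin can be written (in our own generic notation, with redshift-space distortion and photo-z convolution terms omitted) as:

        w(\theta) \;=\; \sum_{\ell} \frac{2\ell+1}{4\pi}\, C_\ell\, P_\ell(\cos\theta),
        \qquad
        C_\ell \;=\; \frac{2}{\pi} \int_0^{\infty} \mathrm{d}k \, k^2 P(k)
        \left[ \int \mathrm{d}z\, \phi(z)\, b(z)\, D(z)\, j_\ell\!\left(k\,\chi(z)\right) \right]^2 ,

    where the P_ℓ are Legendre polynomials, P(k) is the matter power spectrum, φ(z) the radial selection function of the bin, b(z) the galaxy bias, D(z) the growth factor, χ(z) the comoving distance and j_ℓ spherical Bessel functions.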

  6. Approximation of HRPITS results for SI GaAs by large scale support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Jankowski, Stanisław; Wojdan, Konrad; Szymański, Zbigniew; Kozłowski, Roman

    2006-10-01

    For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from high-resolution photoinduced transient spectroscopy experiments. Through a smart decomposition of the data set, the SVNTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (i.e., a small number of support vectors). The parameters of deep-level defect centers extracted from the SVM approximation are of good quality compared with the reference data.
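
    A minimal illustration of support-vector regression applied to a two-dimensional correlation surface, using scikit-learn's SVR rather than the solver mentioned above; the synthetic surface and hyperparameters are placeholders for the real HRPITS data and tuning.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a correlation surface sampled on a regular grid.
        T, ew = np.meshgrid(np.linspace(0.1, 1.0, 40), np.linspace(0.1, 1.0, 40))
        surface = np.exp(-((T - 0.4) ** 2 + (ew - 0.6) ** 2) / 0.02)
        X = np.column_stack([T.ravel(), ew.ravel()])
        y = surface.ravel() + 0.01 * rng.standard_normal(surface.size)

        # epsilon-SVR with an RBF kernel; epsilon controls sparsity (number of support vectors).
        model = SVR(kernel="rbf", C=10.0, epsilon=0.02, gamma=20.0)
        model.fit(X, y)

        print("support vectors:", model.support_vectors_.shape[0], "of", X.shape[0], "points")
        print("max abs error:", np.max(np.abs(model.predict(X) - surface.ravel())))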

  7. Skewness-induced asymmetric modulation of small-scale turbulence by large-scale structures

    NASA Astrophysics Data System (ADS)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2016-01-01

    Several recent studies discuss the role of skewness of the turbulent velocity fluctuations in near-wall shear layers, in the context of quantifying the correlation between large-scale motions and amplitude variations of small-scale fluctuations—referred to as "modulation." The present study is based on the premise that the skewness of the small-scale fluctuations should be accounted for explicitly in the process of defining their envelope, which characterizes their amplitude variations. This leads to the notion of two envelopes, one for positive and the other for negative small-scale fluctuations, and hence also to two corresponding correlation coefficients. Justification for this concept is provided first by an examination of a high-frequency synthetic signal subjected to realistic skewness-inducing modulation. A new formalism is provided for deriving the two envelopes, and its fidelity is demonstrated for the synthetic test case. The method is then applied to a channel flow at a friction Reynolds number of 4200, for which direct numerical simulation (DNS) data are available. The large-scale and small-scale fields are separated by the empirical mode decomposition method, and the modulation of the small-scale fluctuations by the large scales is examined. Separate maps of the correlation coefficient and of two-point correlations, the latter linking the large-scale motions and the envelopes of the small-scale motions, are derived for the two envelopes pertaining to positive and negative small-scale fluctuations, and these demonstrate a significant sensitivity to the envelope-definition process, especially close to the wall where the skewness of the small-scale fluctuations is the dominant contributor to the total value.
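
    A schematic of the two-envelope idea on a synthetic signal: the positive and negative small-scale fluctuations are given separate envelopes (here via Hilbert transforms of the clipped signal, a simplification of the paper's formalism), and each envelope gets its own modulation coefficient with respect to the large-scale component. The signal construction, in which the positive side is modulated more strongly, is an assumption for illustration, and no empirical mode decomposition is performed.

        import numpy as np
        from scipy.signal import hilbert

        t = np.linspace(0.0, 10.0, 4096)
        u_large = np.sin(2.0 * np.pi * 0.3 * t)                 # large-scale motion
        carrier = np.sin(2.0 * np.pi * 30.0 * t)                # small-scale fluctuations
        # Asymmetric (skewness-inducing) modulation: positive excursions respond more.
        u_small = np.where(carrier > 0.0,
                           (1.0 + 0.6 * u_large) * carrier,
                           (1.0 + 0.2 * u_large) * carrier)

        env_pos = np.abs(hilbert(np.where(u_small > 0.0, u_small, 0.0)))
        env_neg = np.abs(hilbert(np.where(u_small < 0.0, u_small, 0.0)))

        def corr(a, b):
            a, b = a - a.mean(), b - b.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

        # Two modulation coefficients instead of one, one per envelope.
        print("positive-side coefficient:", round(corr(u_large, env_pos), 3))
        print("negative-side coefficient:", round(corr(u_large, env_neg), 3))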

  8. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the methematical foundations and engineering princaples necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large and the behavior of sociotechnical systems is tremendously complex. The state of affairs f o r these kinds of systems is characterized by very little satisfactory formal theory, a good decal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  9. Signatures of large-scale and local climates on the demography of white-tailed ptarmigan in Rocky Mountain National Park, Colorado, USA.

    PubMed

    Wang, Guiming; Hobbs, N Thompson; Galbraith, Hector; Giesen, Kenneth M

    2002-09-01

    Global climate change may impact wildlife populations by affecting local weather patterns, which, in turn, can impact a variety of ecological processes. However, it is not clear that local variations in ecological processes can be explained by large-scale patterns of climate. The North Atlantic oscillation (NAO) is a large-scale climate phenomenon that has been shown to influence the population dynamics of some animals. Although effects of the NAO on vertebrate population dynamics have been studied, it remains uncertain whether it broadly predicts the impact of weather on species. We examined the ability of local weather data and the NAO to explain the annual variation in population dynamics of white-tailed ptarmigan (Lagopus leucurus) in Rocky Mountain National Park, USA. We performed canonical correlation analysis on the demographic subspace of ptarmigan and the local-climate subspace defined by the empirical orthogonal function (EOF), using data from 1975 to 1999. We found that the two subspaces were significantly correlated on the first canonical variable. The Pearson correlation coefficient of the first EOF values of the demographic and local-climate subspaces was significant. The population density and the first EOF of the local-climate subspace influenced the ptarmigan population with 1-year lags in the Gompertz model. However, the NAO index was neither related to the first two EOFs of the local-climate subspace nor to the first EOF of the demographic subspace of ptarmigan. Moreover, the NAO index was not a significant term in the Gompertz model for the ptarmigan population. Therefore, local climate had a stronger signature on the demography of ptarmigan than did a large-scale index, i.e., the NAO index. We conclude that local responses of wildlife populations to changing climate may not be adequately explained by models that project large-scale climatic patterns.
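
    A small example of the canonical-correlation step described above, using scikit-learn's CCA on synthetic stand-ins for the demographic and local-climate EOF subspaces; the data, dimensions and embedded shared signal are invented purely for illustration.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        years = 25                                   # e.g. annual values for 1975-1999

        # Synthetic "local-climate subspace" (leading EOF amplitudes) and
        # "demographic subspace" (e.g. density and growth-rate descriptors).
        climate = rng.standard_normal((years, 3))
        shared = climate[:, :1]                      # a common signal linking the two sets
        demography = 0.8 * shared + 0.3 * rng.standard_normal((years, 2))

        cca = CCA(n_components=2)
        U, V = cca.fit_transform(climate, demography)

        # Pearson correlation of the first pair of canonical variables.
        r1 = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
        print("first canonical correlation:", round(float(r1), 3))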

  10. Signatures of large-scale and local climates on the demography of white-tailed ptarmigan in Rocky Mountain National Park, Colorado, USA

    NASA Astrophysics Data System (ADS)

    Wang, Guiming; Hobbs, Thompson; Galbraith, Hector; Giesen, Kenneth

    2002-06-01

    Global climate change may impact wildlife populations by affecting local weather patterns, which, in turn, can impact a variety of ecological processes. However, it is not clear that local variations in ecological processes can be explained by large-scale patterns of climate. The North Atlantic oscillation (NAO) is a large-scale climate phenomenon that has been shown to influence the population dynamics of some animals. Although effects of the NAO on vertebrate population dynamics have been studied, it remains uncertain whether it broadly predicts the impact of weather on species. We examined the ability of local weather data and the NAO to explain the annual variation in population dynamics of white-tailed ptarmigan (Lagopus leucurus) in Rocky Mountain National Park, USA. We performed canonical correlation analysis on the demographic subspace of ptarmigan and the local-climate subspace defined by the empirical orthogonal function (EOF), using data from 1975 to 1999. We found that the two subspaces were significantly correlated on the first canonical variable. The Pearson correlation coefficient of the first EOF values of the demographic and local-climate subspaces was significant. The population density and the first EOF of the local-climate subspace influenced the ptarmigan population with 1-year lags in the Gompertz model. However, the NAO index was neither related to the first two EOFs of the local-climate subspace nor to the first EOF of the demographic subspace of ptarmigan. Moreover, the NAO index was not a significant term in the Gompertz model for the ptarmigan population. Therefore, local climate had a stronger signature on the demography of ptarmigan than did a large-scale index, i.e., the NAO index. We conclude that local responses of wildlife populations to changing climate may not be adequately explained by models that project large-scale climatic patterns.

  11. Mining Large Scale Tandem Mass Spectrometry Data for Protein Modifications Using Spectral Libraries.

    PubMed

    Horlacher, Oliver; Lisacek, Frederique; Müller, Markus

    2016-03-01

    Experimental improvements in post-translational modification (PTM) detection by tandem mass spectrometry (MS/MS) have allowed the identification of vast numbers of PTMs. Open modification searches (OMSs) of MS/MS data, which do not require prior knowledge of the modifications present in the sample, further increased the diversity of detected PTMs. Despite much effort, there is still a lack of functional annotation of PTMs. One possibility to narrow the annotation gap is to mine MS/MS data deposited in public repositories and to correlate the PTM presence with biological meta-information attached to the data. Since the data volume can be quite substantial and contain tens of millions of MS/MS spectra, the data mining tools must be able to cope with big data. Here, we present two tools, Liberator and MzMod, which are built using the MzJava class library and the Apache Spark large-scale computing framework. Liberator builds large MS/MS spectrum libraries, and MzMod searches them in an OMS mode. We applied these tools to a recently published set of 25 million spectra from 30 human tissues and present tissue-specific PTMs. We also compared the results to the ones obtained with the OMS tool MODa and the search engine X!Tandem.

  12. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    SciTech Connect

    Bauer, Anne H.; Jerke, Jonathan; Scalzo, Richard; Rabinowitz, David; Ellman, Nancy; Baltay, Charles

    2011-05-10

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity, while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R{sub 200}) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.
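
    The core of the measurement can be caricatured in a few lines: given variability amplitudes and apparent magnitudes for a lensed ensemble and an unlensed control ensemble, the mean magnitude offset at fixed variability gives the mean magnification. The sketch below does this for synthetic data with an assumed linear variability-magnitude relation; it is not the survey pipeline, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        def synthetic_quasars(n, mu=1.0):
            """Variability amplitude and apparent magnitude for n quasars, with an
            assumed mean relation m = 19.5 + 2.0*log10(A) and magnification mu."""
            log_amp = rng.uniform(-1.0, 0.0, n)           # log10 variability amplitude
            m_true = 19.5 + 2.0 * log_amp + 0.3 * rng.standard_normal(n)
            return log_amp, m_true - 2.5 * np.log10(mu)   # magnification brightens, A unchanged

        log_amp_field, m_field = synthetic_quasars(5000, mu=1.0)   # control ensemble
        log_amp_lens, m_lens = synthetic_quasars(500, mu=1.1)      # ensemble behind clusters

        # Fit the variability-magnitude relation in the control field ...
        coeffs = np.polyfit(log_amp_field, m_field, 1)
        # ... and measure the mean magnitude offset of the lensed sample from that relation.
        delta_m = np.mean(m_lens - np.polyval(coeffs, log_amp_lens))
        mu_est = 10.0 ** (-0.4 * delta_m)
        print("estimated mean magnification:", round(float(mu_est), 3))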

  13. Measuring Lensing Magnification of Quasars by Large Scale Structure Using the Variability-Luminosity Relation

    NASA Astrophysics Data System (ADS)

    Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan; Scalzo, Richard; Rabinowitz, David; Ellman, Nancy; Baltay, Charles

    2011-05-01

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity, while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R_200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.

  14. Areal and laminar differentiation in the mouse neocortex using large scale gene expression data.

    PubMed

    Hawrylycz, Mike; Bernard, Amy; Lau, Chris; Sunkin, Susan M; Chakravarty, M Mallar; Lein, Ed S; Jones, Allan R; Ng, Lydia

    2010-02-01

    Although cytoarchitectonic organization of the mammalian cortex into different laminae has been well studied, identifying the architectural differences that distinguish cortical areas from one another is more challenging. Localization of large anatomical structures is possible using magnetic resonance imaging or invasive techniques (such as anterograde or retrograde tracing), but identifying patterns in gene expression architecture is limited, as gene products do not necessarily identify an immediate functional consequence of a specialized area. Expression of specific genes in the mouse and human cortex is most often identified across entire laminae, and areal patterning of expression (when it exists) is most easily differentiated on a layer-by-layer basis. Since cortical organization is defined by the expression of large sets of genes, the task of identifying individual structures (or groups of structures) cannot be accomplished using individual areal markers. In this manuscript we describe a methodology for clustering gene expression correlation profiles in the C57Bl/6J mouse cortex to identify large-scale genetic relationships between layers and areas. By using the Anatomic Gene Expression Atlas (http://mouse.brain-map.org/agea/) derived from in situ hybridization data in the Allen Brain Atlas, we show that a consistent expression-based organization of areal patterning in the mouse cortex exists when clustered on a laminar basis. Surface-based mapping and visualization techniques are used as a representation to clarify these relationships. PMID:19800006
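
    A toy version of clustering expression-correlation profiles, assuming a voxel-by-gene expression matrix: each voxel is described by its correlation with every other voxel, and those profiles are grouped by agglomerative clustering. This is only a schematic of the general approach, not the AGEA pipeline, and the data are synthetic.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(0)

        # Synthetic expression matrix: 120 cortical voxels x 200 genes, with three
        # built-in voxel groups sharing expression patterns (stand-in for areas/layers).
        groups = np.repeat([0, 1, 2], 40)
        signatures = rng.standard_normal((3, 200))
        expression = signatures[groups] + 0.8 * rng.standard_normal((120, 200))

        # Each voxel's feature vector is its correlation profile against all voxels.
        profiles = np.corrcoef(expression)                  # 120 x 120 correlation matrix

        labels = AgglomerativeClustering(n_clusters=3).fit_predict(profiles)
        for g in range(3):
            print("true group", g, "-> cluster counts:",
                  np.bincount(labels[groups == g], minlength=3))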

  15. Mining Large Scale Tandem Mass Spectrometry Data for Protein Modifications Using Spectral Libraries.

    PubMed

    Horlacher, Oliver; Lisacek, Frederique; Müller, Markus

    2016-03-01

    Experimental improvements in post-translational modification (PTM) detection by tandem mass spectrometry (MS/MS) has allowed the identification of vast numbers of PTMs. Open modification searches (OMSs) of MS/MS data, which do not require prior knowledge of the modifications present in the sample, further increased the diversity of detected PTMs. Despite much effort, there is still a lack of functional annotation of PTMs. One possibility to narrow the annotation gap is to mine MS/MS data deposited in public repositories and to correlate the PTM presence with biological meta-information attached to the data. Since the data volume can be quite substantial and contain tens of millions of MS/MS spectra, the data mining tools must be able to cope with big data. Here, we present two tools, Liberator and MzMod, which are built using the MzJava class library and the Apache Spark large scale computing framework. Liberator builds large MS/MS spectrum libraries, and MzMod searches them in an OMS mode. We applied these tools to a recently published set of 25 million spectra from 30 human tissues and present tissue specific PTMs. We also compared the results to the ones obtained with the OMS tool MODa and the search engine X!Tandem. PMID:26653734

  16. Detectability of large-scale power suppression in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan

    2010-12-01

    Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (~100 Gpc^3) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.

  17. Large-scale sensor systems based on graphene electrolyte-gated field-effect transistors.

    PubMed

    Mackin, Charles; Palacios, Tomás

    2016-04-25

    This work reports a novel graphene electrolyte-gated field-effect transistor (EGFET) array architecture along with a compact, self-contained, and inexpensive measurement system that allows DC characterization of hundreds of graphene EGFETs as a function of VDS and VGS within a matter of minutes. We develop a reliable graphene EGFET fabrication process capable of producing 100% yield for a sample size of 256 devices. Large sample size statistical analysis of graphene EGFET electrical performance is performed for the first time. This work develops a compact piecewise DC model for graphene EGFETs that is shown capable of fitting 87% of IDS vs. VGS curves with a mean percent error of 7% or less. The model is used to extract variations in device parameters such as mobility, contact resistance, minimum carrier concentration, and Dirac point. Correlations in variations are presented. Lastly, this work presents a framework for application-specific optimization of large-scale sensor designs based on graphene EGFETs. PMID:26788552
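
    As an illustration of extracting device parameters from measured transfer curves, the sketch below fits a commonly used (non-piecewise) graphene FET conductance model with scipy's curve_fit. The model form, synthetic data and parameter names are our assumptions and differ from the compact piecewise model developed in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def transfer_curve(vgs, g_min, k, v_dirac, vds=0.05):
            """Simple graphene FET model: channel conductance rises roughly linearly
            away from the Dirac point, with a residual minimum conductance g_min."""
            conductance = np.sqrt(g_min ** 2 + (k * (vgs - v_dirac)) ** 2)
            return conductance * vds

        rng = np.random.default_rng(0)
        vgs = np.linspace(-0.4, 0.8, 120)
        ids_measured = transfer_curve(vgs, 2e-4, 1.5e-3, 0.25) \
                       * (1.0 + 0.02 * rng.standard_normal(vgs.size))

        popt, pcov = curve_fit(transfer_curve, vgs, ids_measured, p0=[1e-4, 1e-3, 0.0])
        g_min_fit, k_fit, v_dirac_fit = popt
        print("Dirac point (V):", round(float(v_dirac_fit), 3))
        print("mean percent error:",
              round(float(100 * np.mean(np.abs(transfer_curve(vgs, *popt) - ids_measured)
                                        / ids_measured)), 2))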

  18. Imprints of massive primordial fields on large-scale structure

    NASA Astrophysics Data System (ADS)

    Dimastrogiovanni, Emanuela; Fasiello, Matteo; Kamionkowski, Marc

    2016-02-01

    Attention has focussed recently on models of inflation that involve a second or more fields with a mass near the inflationary Hubble parameter H, as may occur in supersymmetric theories if the supersymmetry-breaking scale is not far from H. Quasi-single-field (QsF) inflation is a relatively simple family of phenomenological models that serve as a proxy for theories with additional fields with masses m~ H. Since QsF inflation involves fields in addition to the inflaton, the consistency conditions between correlations that arise in single-clock inflation are not necessarily satisfied. As a result, correlation functions in the squeezed limit may be larger than in single-field inflation. Scalar non-Gaussianities mediated by the massive isocurvature field in QsF have been shown to be potentially observable. These are especially interesting since they would convey information about the mass of the isocurvature field. Here we consider non-Gaussian correlators involving tensor modes and their observational signatures. A physical correlation between a (long-wavelength) tensor mode and two scalar modes (tss), for instance, may give rise to local departures from statistical isotropy or, in other words, a non-trivial four-point function. The presence of the tensor mode may moreover be inferred geometrically from the shape dependence of the four-point function. We compute tss and stt (one soft curvature mode and two hard tensors) bispectra in QsF inflation, identifying the conditions necessary for these to "violate" the consistency relations. We find that while consistency conditions are violated by stt correlations, they are preserved by the tss in the minimal QsF model. Our study of primordial correlators which include gravitons in seeking imprints of additional fields with masses m~ H during inflation can be seen as complementary to the recent ``cosmological collider physics'' proposal.

  19. Structure of CIMS in large-scale continuous manufacturing industry and its optimization strategy

    NASA Astrophysics Data System (ADS)

    Yao, Jianchu; Wang, Gaofeng; Wang, Boxing; Zhou, Ji; Yu, Jun

    1995-08-01

    This paper focuses on the large-scale petroleum refinery manufacturing industry and analyzes the characteristics and functional requirements of CIMS in continuous process industries. It then compares the continuous and discrete manufacturing industries in terms of the CIMS conceptual model, and presents the functional model framework and key technologies of CIPS. The paper also proposes an optimization model and solution strategy for CIMS in the continuous process industry.

  20. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    NASA Technical Reports Server (NTRS)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; Jiang, Jonathan H.

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate-factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation, intensified rain rates, decreased rain frequency, decreased tropospheric latent heating, suppressed cloud-top height, increased outgoing longwave radiation, and enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern part of the region, where low-level stratus are found. Using CO as a proxy of biomass-burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar covariability was observed. Model simulations with NCAR CAM5 show spatio-temporal patterns similar to the observations. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5
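
    A minimal sketch of the kind of analysis described above, assuming monthly aerosol, precipitation, and ENSO-index time series are available as plain NumPy arrays (all names and values below are hypothetical): the climate signal is regressed out of each field before the aerosol-precipitation correlation is computed.

        import numpy as np

        def remove_climate_signal(field, climate_index):
            """Regress a climate index (e.g. an ENSO index) out of a time series
            and return the residual anomalies."""
            x = climate_index - climate_index.mean()
            y = field - field.mean()
            beta = np.dot(x, y) / np.dot(x, x)
            return y - beta * x

        # Hypothetical monthly series for one grid cell (replace with real data)
        rng = np.random.default_rng(0)
        enso = rng.normal(size=240)
        aod = 0.3 + 0.05 * enso + 0.02 * rng.normal(size=240)
        precip = 5.0 - 0.8 * enso + 0.5 * rng.normal(size=240)

        aod_res = remove_climate_signal(aod, enso)
        precip_res = remove_climate_signal(precip, enso)
        r = np.corrcoef(aod_res, precip_res)[0, 1]
        print(f"aerosol-precipitation correlation after removing ENSO: {r:.2f}")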

  1. Position-dependent power spectrum of the large-scale structure: a novel method to measure the squeezed-limit bispectrum

    SciTech Connect

    Chiang, Chi-Ting; Wagner, Christian; Schmidt, Fabian; Komatsu, Eiichiro E-mail: cwagner@mpa-garching.mpg.de E-mail: komatsu@mpa-garching.mpg.de

    2014-05-01

    The influence of large-scale density fluctuations on structure formation on small scales is described by the three-point correlation function (bispectrum) in the so-called "squeezed configurations," in which one wavenumber, say k_3, is much smaller than the other two, i.e., k_3 << k_1 ≈ k_2. This bispectrum is generated by non-linear gravitational evolution and possibly also by inflationary physics. In this paper, we use this fact to show that the bispectrum in the squeezed configurations can be measured without employing three-point function estimators. Specifically, we use the "position-dependent power spectrum," i.e., the power spectrum measured in smaller subvolumes of the survey (or simulation box), and correlate it with the mean overdensity of the corresponding subvolume. This correlation directly measures an integral of the bispectrum dominated by the squeezed configurations. Measuring this correlation is only slightly more complex than measuring the power spectrum itself, and sidesteps the considerable complexity of the full bispectrum estimation. We use cosmological N-body simulations of collisionless particles with Gaussian initial conditions to show that the measured correlation between the position-dependent power spectrum and the long-wavelength overdensity agrees with the theoretical expectation. The position-dependent power spectrum thus provides a new, efficient, and promising way to measure the squeezed-limit bispectrum from large-scale structure observations such as galaxy redshift surveys.
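
    A minimal sketch of the estimator, assuming a periodic density-contrast cube delta is already available (grid sizes and names are illustrative, and each subvolume is treated as periodic for simplicity): the box is split into subvolumes, a power spectrum and a mean overdensity are measured in each, and the two are correlated.

        import numpy as np

        def power_spectrum(delta, box_size, n_bins=16):
            """Spherically averaged power spectrum of a periodic density-contrast cube."""
            n = delta.shape[0]
            dk = np.fft.fftn(delta) / n**3
            power = (np.abs(dk) ** 2) * box_size**3
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
            bins = np.linspace(0.0, kmag.max(), n_bins + 1)
            idx = np.digitize(kmag, bins) - 1
            pk = np.array([power.ravel()[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(n_bins)])
            return 0.5 * (bins[1:] + bins[:-1]), pk

        def position_dependent_pk(delta, box_size, n_sub=4):
            """Correlate the power spectrum of each subvolume with its mean overdensity.
            The result is an integral of the bispectrum dominated by squeezed configurations."""
            n = delta.shape[0]
            m = n // n_sub
            dbars, pks = [], []
            for i in range(n_sub):
                for j in range(n_sub):
                    for l in range(n_sub):
                        sub = delta[i*m:(i+1)*m, j*m:(j+1)*m, l*m:(l+1)*m]
                        dbar = sub.mean()                # long-wavelength overdensity
                        k, pk = power_spectrum(sub - dbar, box_size / n_sub)
                        dbars.append(dbar)
                        pks.append(pk)
            dbars, pks = np.array(dbars), np.array(pks)
            return k, (pks * dbars[:, None]).mean(axis=0)   # <P(k, x) * delta_bar(x)>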

  2. Horizon Run 4 Simulation: Coupled Evolution of Galaxies and Large-Scale Structures of the Universe

    NASA Astrophysics Data System (ADS)

    Kim, Juhan; Park, Changbom; L'Huillier, Benjamin; Hong, Sungwook E.

    2015-08-01

    The Horizon Run 4 is a cosmological N-body simulation designed for the study of coupled evolution between galaxies and large-scale structures of the Universe, and for the test of galaxy formation models. Using 6300^3 gravitating particles in a cubic box of L_{box} = 3150 h^{-1} Mpc, we build a dense forest of halo merger trees to trace the halo merger history with a halo mass resolution scale down to M_s = 2.7 × 10^{11} h^{-1} M_⊙. We build a set of particle and halo data, which can serve as testbeds for comparison of cosmological models and gravitational theories with observations. We find that the FoF halo mass function shows a substantial deviation from the universal form with tangible redshift evolution of amplitude and shape. At higher redshifts, the amplitude of the mass function is lower, and the functional form is shifted toward larger values of ln (1/σ). We also find that the baryonic acoustic oscillation feature in the two-point correlation function of mock galaxies becomes broader, with the peak position moving to smaller scales and the peak amplitude decreasing for increasing directional cosine μ, compared to the linear predictions. From the halo merger trees built from halo data at 75 redshifts, we measure the half-mass epoch of halos and find that less massive halos tend to reach half of their current mass at higher redshifts. Simulation outputs including snapshot data, past lightcone space data, and halo merger data are available at http://sdss.kias.re.kr/astro/Horizon-Run4
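
    For context, the "universal form" referred to above writes the friends-of-friends mass function in terms of a multiplicity function f(σ) of the linear variance σ(M, z) alone (standard notation, not specific to this simulation):

        \frac{dn}{d\ln M} = f(\sigma)\, \frac{\bar{\rho}_m}{M}\, \frac{d\ln \sigma^{-1}}{d\ln M}

    so the reported deviation from universality corresponds to a residual redshift dependence in the amplitude and shape of f plotted against ln(1/σ).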

  3. Spontaneous symmetry breaking in correlated wave functions

    NASA Astrophysics Data System (ADS)

    Kaneko, Ryui; Tocchio, Luca F.; Valenti, Roser; Becca, Federico; Gros, Claudius

    We show that Jastrow-Slater wave functions, in which a density-density Jastrow factor is applied onto an uncorrelated fermionic state, may possess long-range order even when all symmetries are preserved in the wave function. This fact is mainly related to the presence of a sufficiently strong Jastrow term (also including the case of full Gutzwiller projection, suitable for describing spin models). Selected examples are reported, including the spawning of Néel order and dimerization in spin systems, and the stabilization of density and orbital order in itinerant electronic systems.

  4. Spontaneous symmetry breaking in correlated wave functions

    NASA Astrophysics Data System (ADS)

    Kaneko, Ryui; Tocchio, Luca F.; Valentí, Roser; Becca, Federico; Gros, Claudius

    2016-03-01

    We show that Jastrow-Slater wave functions, in which a density-density Jastrow factor is applied onto an uncorrelated fermionic state, may possess long-range order even when all symmetries are preserved in the wave function. This fact is mainly related to the presence of a sufficiently strong Jastrow term (also including the case of full Gutzwiller projection, suitable for describing spin models). Selected examples are reported, including the spawning of Néel order and dimerization in spin systems, and the stabilization of charge and orbital order in itinerant electronic systems.
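
    For reference, the variational state discussed in both records above has the standard Jastrow-Slater form (generic notation, not taken verbatim from the paper):

        |\Psi\rangle = \mathcal{J}\,|\Phi_0\rangle, \qquad \mathcal{J} = \exp\Big(-\tfrac{1}{2}\sum_{i,j} v_{ij}\, n_i\, n_j\Big)

    where |Φ_0⟩ is an uncorrelated fermionic state that preserves all symmetries, n_i is the local density operator, and the pseudopotential v_{ij} is the variational density-density Jastrow factor; full Gutzwiller projection corresponds to the limit of an infinitely strong on-site term.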

  5. Large-scale seismic waveform quality metric calculation using Hadoop

    NASA Astrophysics Data System (ADS)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely
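
    A minimal PySpark sketch of the pattern described above, distributing a per-file quality metric across a cluster; the file list, the read_waveform helper, and the gap-fraction/RMS metric are hypothetical stand-ins, not the study's actual metrics.

        import numpy as np
        from pyspark import SparkContext

        def read_waveform(path):
            """Hypothetical reader: returns a 1-D NumPy array of samples for one channel-day."""
            return np.load(path)

        def quality_metric(path):
            """Toy quality metric: fraction of zero-filled (gap) samples and RMS amplitude."""
            data = read_waveform(path)
            gap_fraction = float(np.mean(data == 0.0))
            rms = float(np.sqrt(np.mean(data.astype(np.float64) ** 2)))
            return path, gap_fraction, rms

        if __name__ == "__main__":
            sc = SparkContext(appName="waveform-quality-metrics")
            paths = ["/data/day1.npy", "/data/day2.npy"]   # placeholder file list
            metrics = sc.parallelize(paths, numSlices=len(paths)).map(quality_metric).collect()
            for path, gap, rms in metrics:
                print(f"{path}: gap fraction={gap:.3f}, rms={rms:.1f}")
            sc.stop()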

  6. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

    . Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuit. The collocation scheme and nonlinear programming algorithm presented in this work, complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.

  7. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE PAGES

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; Dodge, Douglas A.; Ruppert, Stanley D.

    2016-05-27

    Here in this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will

  8. A Novel Large-Scale Deletion of The Mitochondrial DNA of Spermatozoa of Men in North Iran

    PubMed Central

    Gholinezhad Chari, Maryam; Hosseinzadeh Colagar, Abasalt; Bidmeshkipour, Ali

    2015-01-01

    Background To investigate the level of correlation between large-scale deletions of the mitochondrial DNA (mtDNA) and defective sperm function. Materials and Methods In this analytic study, a total of 25 semen samples from normozoospermic infertile men from the North of Iran were collected from the IVF center of an infertility clinic. The swim-up procedure was performed to separate spermatozoa into two groups (a normal motility group and an abnormal motility group) using 2.0 ml of Ham’s F-10 medium and 1.0 ml of semen. After total DNA extraction, a long-range polymerase chain reaction (PCR) technique was used to determine the mtDNA deletions in human spermatozoa. Results The PCR analysis showed a common 4977 bp deletion and a novel 4866 bp deletion (flanked by a seven-nucleotide direct repeat of 5΄-ACCCCCT-3΄ within the deleted area) in the mtDNA of spermatozoa in both groups. However, the frequency of mtDNA deletions in the abnormal motility group was significantly higher than in the normal motility group (56 vs. 24% for the 4866 bp-deleted mtDNA and 52 vs. 28% for the 4977 bp-deleted mtDNA, respectively). Conclusion It is suggested that large-scale deletions of the mtDNA are associated with poor sperm motility and may be a causative factor in the decline of fertility in men. PMID:25780528

  9. Optimization of large-scale pseudotargeted metabolomics method based on liquid chromatography-mass spectrometry.

    PubMed

    Luo, Ping; Yin, Peiyuan; Zhang, Weijian; Zhou, Lina; Lu, Xin; Lin, Xiaohui; Xu, Guowang

    2016-03-11

    Liquid chromatography-mass spectrometry (LC-MS) is now a mainstream technique for large-scale metabolic phenotyping aimed at a better understanding of genomic functions. However, repeatability is still an essential issue for LC-MS based methods, and convincing strategies for long-duration analyses are urgently required. Our previously reported pseudotargeted method, which combines nontargeted and targeted analyses, has proved to be a practical approach yielding high-quality and information-rich data. In this study, we developed a comprehensive strategy based on the pseudotargeted analysis by integrating blank washes, pooled quality control (QC) samples, and post-calibration for large-scale metabolomics studies. The performance of the strategy was optimized in both the pre- and post-acquisition stages, including the selection of QC samples, the insertion frequency of QC samples, and the post-calibration methods. The results imply that the pseudotargeted method is rather stable and suitable for large-scale metabolic profiling. As a proof of concept, the proposed strategy was applied to the combination of 3 independent batches within a time span of 5 weeks, and generated about 54% of the features with coefficients of variation (CV) below 15%. Moreover, the stability and maximal capacity of a single analytical batch could be extended to at least 282 injections (about 110 h) while still providing excellent stability: the CV of 63% of the metabolic features was less than 15%. Taken together, the improved repeatability of our strategy provides a reliable protocol for large-scale metabolomics studies.
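
    A small sketch of the repeatability statistic quoted above, assuming a feature table with rows as metabolic features and columns as repeated QC injections (the array and its values are hypothetical): the coefficient of variation is computed per feature and the fraction of features under a 15% threshold is reported.

        import numpy as np

        def cv_summary(feature_table, threshold=0.15):
            """Per-feature coefficient of variation across QC injections and the
            fraction of features below the given CV threshold."""
            means = feature_table.mean(axis=1)
            stds = feature_table.std(axis=1, ddof=1)
            cv = np.where(means > 0, stds / means, np.inf)
            return cv, float(np.mean(cv < threshold))

        # Hypothetical intensity table: 2000 features x 30 QC injections
        rng = np.random.default_rng(1)
        table = rng.lognormal(mean=10.0, sigma=1.0, size=(2000, 1)) * \
                rng.normal(loc=1.0, scale=0.1, size=(2000, 30))
        cv, frac_ok = cv_summary(table)
        print(f"{100 * frac_ok:.0f}% of features have CV < 15%")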

  10. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is applied to large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
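
    A small sketch of two ingredients mentioned above, under generic assumptions rather than the paper's actual scheme: a CFL-limited adaptive time step for a shallow-water solver and a simple wet/dry depth threshold that excludes near-dry cells from the wave-speed estimate.

        import numpy as np

        G = 9.81          # gravitational acceleration (m/s^2)
        H_DRY = 1e-6      # wet/dry threshold on water depth (m); an assumed value

        def adaptive_time_step(h, hu, hv, dx, cfl=0.9):
            """CFL-limited time step for a 2-D shallow-water solver on a uniform grid
            (dx = dy). h is water depth, hu/hv are unit discharges; cells below the
            wet/dry threshold are excluded from the wave-speed estimate."""
            wet = h > H_DRY
            if not np.any(wet):
                return np.inf
            u = hu[wet] / h[wet]
            v = hv[wet] / h[wet]
            c = np.sqrt(G * h[wet])
            max_speed = np.max(np.maximum(np.abs(u), np.abs(v)) + c)
            return cfl * dx / max_speed

        # Example: a mostly dry domain with one wet patch
        h = np.zeros((100, 100)); h[40:60, 40:60] = 2.0
        hu = np.zeros_like(h); hv = np.zeros_like(h)
        print(f"dt = {adaptive_time_step(h, hu, hv, dx=10.0):.2f} s")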

  11. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform an effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
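
    A sketch of the low-rank idea underlying prototype-based methods, assuming an RBF kernel and randomly chosen prototypes; this is a generic Nystrom-style approximation of the kernel matrix, not the paper's full PVM algorithm.

        import numpy as np

        def rbf_kernel(a, b, gamma=0.5):
            """Gaussian RBF kernel matrix between row-vector sets a and b."""
            d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
            return np.exp(-gamma * d2)

        def prototype_approximation(x, n_prototypes=50, gamma=0.5, jitter=1e-8):
            """Nystrom-style low-rank approximation K ~= K_nm K_mm^{-1} K_nm^T
            built from a small set of prototype points."""
            rng = np.random.default_rng(0)
            idx = rng.choice(len(x), size=n_prototypes, replace=False)
            prototypes = x[idx]
            k_nm = rbf_kernel(x, prototypes, gamma)
            k_mm = rbf_kernel(prototypes, prototypes, gamma) + jitter * np.eye(n_prototypes)
            return k_nm @ np.linalg.solve(k_mm, k_nm.T)

        x = np.random.default_rng(1).normal(size=(500, 10))
        k_full = rbf_kernel(x, x)
        k_approx = prototype_approximation(x)
        err = np.linalg.norm(k_full - k_approx) / np.linalg.norm(k_full)
        print(f"relative Frobenius error of prototype approximation: {err:.3f}")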

  12. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large-scale optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large-scale solid xenon is a factor of two faster than that in the liquid phase.

  13. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large-scale optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large-scale solid xenon is a factor of two faster than that in the liquid phase.

  14. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  15. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from the homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide information additional to that of traditional probes, which yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding the vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions at least for bright galaxies with high central surface brightness, making proper motions a likely contender alongside traditional probes based on current and future distance indicator measurements.
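
    For scale, the transverse peculiar velocity implied by a measured proper motion μ at distance d follows from the standard conversion (approximate, neglecting cosmological corrections at low redshift):

        v_\perp \simeq 4.74\ \mathrm{km\,s^{-1}}\left(\frac{\mu}{\mathrm{arcsec\,yr^{-1}}}\right)\left(\frac{d}{\mathrm{pc}}\right) \simeq 4.74\times 10^{3}\ \mathrm{km\,s^{-1}}\left(\frac{\mu}{\mathrm{mas\,yr^{-1}}}\right)\left(\frac{d}{\mathrm{Mpc}}\right)

    so a galaxy at 5 Mpc moving transversely at ~500 km/s has a proper motion of only ~20 μas/yr, which illustrates why Gaia-class astrometry is needed.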

  16. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large-scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large-scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings, where the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  17. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    The overall objective of the work that was conducted was to understand the present-day large-scale deformations of the crust throughout the western United States and in so doing to improve our ability to assess the potential for seismic hazards in this region. To address this problem, we used a large collection of Global Positioning System (GPS) networks which spans the region to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our results can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  18. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  19. Startup of large-scale projects casts spotlight on IGCC

    SciTech Connect

    Swanekamp, R.

    1996-06-01

    With several large-scale plants cranking up this year, integrated coal gasification/combined cycle (IGCC) appears poised for growth. The technology may eventually help coal reclaim its former prominence in new plant construction, but developers worldwide are eyeing other feedstocks--such as petroleum coke or residual oil. Of the so-called advanced clean-coal technologies, IGCC appears to be having a defining year. Of three large-scale demonstration plants in the US, one is well into startup, a second is expected to begin operating in the fall, and a third should start up by the end of the year; worldwide, over a dozen more projects are in the works. In Italy, for example, several large projects using petroleum coke or refinery residues as feedstocks are proceeding, apparently on a project-finance basis.

  20. Considerations of large scale impact and the early Earth

    NASA Technical Reports Server (NTRS)

    Grieve, R. A. F.; Parmentier, E. M.

    1985-01-01

    Bodies which have preserved portions of their earliest crust indicate that large-scale impact cratering was an important process in early surface and upper crustal evolution. Large impact basins form the basic topographic, tectonic, and stratigraphic framework of the Moon, and impact was responsible for the characteristics of the second-order gravity field and upper crustal seismic properties. The Earth's crustal evolution during the first 800 My of its history is conjectural. The lack of a very early crust may indicate that thermal and mechanical instabilities resulting from intense mantle convection and/or bombardment inhibited crustal preservation. Whatever the case, the potential effects of large-scale impact have to be considered in models of early Earth evolution. Preliminary models of the evolution of a large terrestrial impact basin were derived and are discussed in detail.