Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.
Chen, Rong; Nixon, Erika; Herskovits, Edward
2016-04-01
Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
Yoo, Jae Hyun; Kim, Dohyun; Choi, Jeewook; Jeong, Bumseok
2018-04-01
Methylphenidate is a first-line therapeutic option for treating attention-deficit/hyperactivity disorder (ADHD); however, elicited changes on resting-state functional networks (RSFNs) are not well understood. This study investigated the treatment effect of methylphenidate using a variety of RSFN analyses and explored the collaborative influences of treatment-relevant RSFN changes in children with ADHD. Resting-state functional magnetic resonance imaging was acquired from 20 medication-naïve ADHD children before methylphenidate treatment and twelve weeks later. Changes in large-scale functional connectivity were defined using independent component analysis with dual regression and graph theoretical analysis. The amplitude of low frequency fluctuation (ALFF) was measured to investigate local spontaneous activity alteration. Finally, significant findings were recruited to random forest regression to identify the feature subset that best explains symptom improvement. After twelve weeks of methylphenidate administration, large-scale connectivity was increased between the left fronto-parietal RSFN and the left insula cortex and the right fronto-parietal and the brainstem, while the clustering coefficient (CC) of the global network and nodes, the left fronto-parietal, cerebellum, and occipital pole-visual network, were decreased. ALFF was increased in the bilateral superior parietal cortex and decreased in the right inferior fronto-temporal area. The subset of the local and large-scale RSFN changes, including widespread ALFF changes, the CC of the global network and the cerebellum, could explain the 27.1% variance of the ADHD Rating Scale and 13.72% of the Conner's Parent Rating Scale. Our multivariate approach suggests that the neural mechanism of methylphenidate treatment could be associated with alteration of spontaneous activity in the superior parietal cortex or widespread brain regions as well as functional segregation of the large-scale intrinsic functional network.
Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R
2012-01-01
In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing topology of brain networks is still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional network brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hub and modules, and analysis of resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, is employed for testing the differences in network topologies; analyses that are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched Controls (CON). The results revealed an alteration in small-world characteristics of the brain networks in the ALL survivors; an observation that confirm our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstration of the capabilities of the GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
Development of large-scale functional brain networks in children.
Supekar, Kaustubh; Musen, Mark; Menon, Vinod
2009-07-01
The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
Development of Large-Scale Functional Brain Networks in Children
Supekar, Kaustubh; Musen, Mark; Menon, Vinod
2009-01-01
The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7–9 y) and 22 young-adults (ages 19–22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar “small-world” organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism. PMID:19621066
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share significantly larger number of common interaction partners than random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
Spasojevic, Marko J; Bahlai, Christie A; Bradley, Bethany A; Butterfield, Bradley J; Tuanmu, Mao-Ning; Sistla, Seeta; Wiederholt, Ruscena; Suding, Katharine N
2016-04-01
Understanding the mechanisms underlying ecosystem resilience - why some systems have an irreversible response to disturbances while others recover - is critical for conserving biodiversity and ecosystem function in the face of global change. Despite the widespread acceptance of a positive relationship between biodiversity and resilience, empirical evidence for this relationship remains fairly limited in scope and localized in scale. Assessing resilience at the large landscape and regional scales most relevant to land management and conservation practices has been limited by the ability to measure both diversity and resilience over large spatial scales. Here, we combined tools used in large-scale studies of biodiversity (remote sensing and trait databases) with theoretical advances developed from small-scale experiments to ask whether the functional diversity within a range of woodland and forest ecosystems influences the recovery of productivity after wildfires across the four-corner region of the United States. We additionally asked how environmental variation (topography, macroclimate) across this geographic region influences such resilience, either directly or indirectly via changes in functional diversity. Using path analysis, we found that functional diversity in regeneration traits (fire tolerance, fire resistance, resprout ability) was a stronger predictor of the recovery of productivity after wildfire than the functional diversity of seed mass or species richness. Moreover, slope, elevation, and aspect either directly or indirectly influenced the recovery of productivity, likely via their effect on microclimate, while macroclimate had no direct or indirect effects. Our study provides some of the first direct empirical evidence for functional diversity increasing resilience at large spatial scales. Our approach highlights the power of combining theory based on local-scale studies with tools used in studies at large spatial scales and trait databases to understand pressing environmental issues. © 2015 John Wiley & Sons Ltd.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction of path effect, site response, and instrument responses. Empirical Green's function (EGF) method is one of the most effective methods in removing path effects and station responses by taking the spectral ratio between a larger and smaller event. Traditional EGF method requires identifying suitable event pairs, and analyze each event individually. This allows high quality estimations for strictly selected events, however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs are proposed, which utilize the stacking technique to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine large quantity of events systematically, facilitating analysis of spatial-temporal patterns, and scaling relationship. However, it is unclear how much resolution is scarified during this process. In addition to the empirical Green's function calculation, choice of model parameters and fitting methods also lead to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region, and the borehole array in the Salton Sea geothermal field, I compare the results from the large scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods to systematically compare the results within completely different tectonic environment, in order to quantify the consistency and inconsistency in source parameter estimations, and the associated problems.
Large-scale structure of randomly jammed spheres
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio
2017-05-01
We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
NASA Astrophysics Data System (ADS)
Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.
2003-10-01
Recently many single flux quantum (SFQ) logic circuits containing several thousands of Josephson junctions have been designed successfully by using digital domain simulation based on the hard ware description language (HDL). In the present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, more higher-level circuit abstraction is necessary to reduce the circuit simulation time. In this paper we have investigated the way to describe functionality of the large-scale SFQ digital circuits by a behavior-level HDL description. In this method, the functionality and the timing of the circuit block is defined directly by describing their behavior by the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
Urban forest health monitoring: large-scale assessments in the United States
Anne Buckelew Cumming; Daniel B. Twardus; David J. Nowak
2008-01-01
The U.S. Department of Agriculture, Forest Service (USFS), together with state partners, developed methods to monitor urban forest structure, function, and health at a large statewide scale. Pilot studies have been established in five states using protocols based on USFS Forest Inventory and Analysis and Forest Health Monitoring program data collection standards....
Herbivorous fishes, ecosystem function and mobile links on coral reefs
NASA Astrophysics Data System (ADS)
Welsh, J. Q.; Bellwood, D. R.
2014-06-01
Understanding large-scale movement of ecologically important taxa is key to both species and ecosystem management. Those species responsible for maintaining functional connectivity between habitats are often called mobile links and are regarded as essential elements of resilience. By providing connectivity, they support resilience across spatial scales. Most marine organisms, including fishes, have long-term, biogeographic-scale connectivity through larval movement. Although most reef species are highly site attached after larval settlement, some taxa may also be able to provide rapid, reef-scale connectivity as adults. On coral reefs, the identity of such taxa and the extent of their mobility are not yet known. We use acoustic telemetry to monitor the movements of Kyphosus vaigiensis, one of the few reef fishes that feeds on adult brown macroalgae. Unlike other benthic herbivorous fish species, it also exhibits large-scale (>2 km) movements. Individual K. vaigiensis cover, on average, a 2.5 km length of reef (11 km maximum) each day. These large-scale movements suggest that this species may act as a mobile link, providing functional connectivity, should the need arise, and helping to support functional processes across habitats and spatial scales. An analysis of published studies of home ranges in reef fishes found a consistent relationship between home range size and body length. K. vaigiensis is the sole herbivore to depart significantly from the expected home range-body size relationship, with home range sizes more comparable to exceptionally mobile large pelagic predators rather than other reef herbivores. While the large-scale movements of K. vaigiensis reveal its potential capacity to enhance resilience over large areas, it also emphasizes the potential limitations of small marine reserves to protect some herbivore populations.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
USDA-ARS?s Scientific Manuscript database
Tomato Functional Genomics Database (TFGD; http://ted.bti.cornell.edu) provides a comprehensive systems biology resource to store, mine, analyze, visualize and integrate large-scale tomato functional genomics datasets. The database is expanded from the previously described Tomato Expression Database...
A large-scale perspective on stress-induced alterations in resting-state networks
NASA Astrophysics Data System (ADS)
Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron
2016-02-01
Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed and how it relates to subjective experience are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change, involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with change in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction, revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight on stress-induced neural modulations and their relation to subjective experience.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
ERIC Educational Resources Information Center
Bienstein, Pia; Nussbeck, Susanne
2009-01-01
The psychometric properties of a German version of the Questions About Behavioral Function Scale (QABF) (Matson & Vollmer, 1995) were examined in a sample of 522 individuals with intellectual disabilities residing in large facilities participated. The factor structure was first examined by exploratory factor analysis, yielding a…
NASA Astrophysics Data System (ADS)
Yu, Garmay; A, Shvetsov; D, Karelov; D, Lebedev; A, Radulescu; M, Petukhov; V, Isaev-Ivanov
2012-02-01
Based on X-ray crystallographic data available at Protein Data Bank, we have built molecular dynamics (MD) models of homologous recombinases RecA from E. coli and D. radiodurans. Functional form of RecA enzyme, which is known to be a long helical filament, was approximated by a trimer, simulated in periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of C-terminal domain was the major contributor to the overall dynamics of protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of C-domain is correlated with the conformational changes in the central domain and N-terminal domain, that forms the monomer-monomer interface. Thus, even though C-terminal domain is relatively far from the interface, its orientation is correlated with large-scale filament conformation. PCA of the trajectories in the main chain dihedral angle coordinate space implicates a co-existence of a several different large-scale conformations of the modeled trimer. In order to clarify the relationship of independent domain orientation with large-scale filament conformation, we have performed analysis of independent domain motion and its implications on the filament geometry.
Stability of large-scale systems with stable and unstable subsystems.
NASA Technical Reports Server (NTRS)
Grujic, Lj. T.; Siljak, D. D.
1972-01-01
The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar
2017-09-01
Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
Large-scale gene function analysis with the PANTHER classification system.
Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D
2013-08-01
The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Martinez-Consentino, V. L.; Amaro, J. E.; Ruiz Arriola, E.
2018-06-01
We use a recent scaling analysis of the quasielastic electron scattering data from
Scaling of muscle architecture and fiber types in the rat hindlimb.
Eng, Carolyn M; Smallwood, Laura H; Rainiero, Maria Pia; Lahey, Michele; Ward, Samuel R; Lieber, Richard L
2008-07-01
The functional capacity of a muscle is determined by its architecture and metabolic properties. Although extensive analyses of muscle architecture and fiber type have been completed in a large number of muscles in numerous species, there have been few studies that have looked at the interrelationship of these functional parameters among muscles of a single species. Nor have the architectural properties of individual muscles been compared across species to understand scaling. This study examined muscle architecture and fiber type in the rat (Rattus norvegicus) hindlimb to examine each muscle's functional specialization. Discriminant analysis demonstrated that architectural properties are a greater predictor of muscle function (as defined by primary joint action and anti-gravity or non anti-gravity role) than fiber type. Architectural properties were not strictly aligned with fiber type, but when muscles were grouped according to anti-gravity versus non-anti-gravity function there was evidence of functional specialization. Specifically, anti-gravity muscles had a larger percentage of slow fiber type and increased muscle physiological cross-sectional area. Incongruities between a muscle's architecture and fiber type may reflect the variability of functional requirements on single muscles, especially those that cross multiple joints. Additionally, discriminant analysis and scaling of architectural variables in the hindlimb across several mammalian species was used to explore whether any functional patterns could be elucidated within single muscles or across muscle groups. Several muscles deviated from previously described muscle architecture scaling rules and there was large variability within functional groups in how muscles should be scaled with body size. This implies that functional demands placed on muscles across species should be examined on the single muscle level.
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition- whether functional or physical or discipline-based-that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite simulation time- and finite population size- effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows to extract the infinite-time and infinite-size limit of these estimators.
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2006-01-01
This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build to the system, describe their functionality and interactions. We show the results of running the CyberShake analysis that included over 250,000 jobs using resources available through SCEC and the TeraGrid. ?? 2006 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan Vee; Delgado-Frias, Jose
Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three to five year network lifetimes. In order to support these requirements large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor network and should configuremore » the network in a way such that said higher level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher level functions with significant inherent benefits such as but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.« less
Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi
2018-06-18
Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.
Polychaete functional diversity in shallow habitats: Shelter from the storm
NASA Astrophysics Data System (ADS)
Wouters, Julia M.; Gusmao, Joao B.; Mattos, Gustavo; Lana, Paulo
2018-05-01
Innovative approaches are needed to help understanding how species diversity is related to the latitudinal gradient at large or small scales. We have applied a novel approach, by combining morphological and biological traits, to assess the relative importance of the large scale latitudinal gradient and regional morphodynamic drivers in shaping the functional diversity of polychaete assemblages in shallow water habitats, from exposed to estuarine sandy beaches. We used literature data on polychaetes from beaches along the southern and southeastern Brazilian coast together with data on beach types, slope, grain size, temperature, salinity, and chlorophyll a concentration. Generalized linear models on the FDis index for functional diversity calculated for each site and a combined RLQ and fourth-corner analysis were used to investigate relationships between functional traits and environmental variables. Functional diversity was not related to the latitudinal gradient but negatively correlated with grain size and beach slope. Functional diversity was highest in flat beaches with small grain size, little wave exposure and enhanced primary production, indicating that small scale morphodynamic conditions are the primary drivers of polychaete functional diversity.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Astrophysics Data System (ADS)
Witte, M.; Morrison, H.; Jensen, J. B.; Bansemer, A.; Gettelman, A.
2017-12-01
The spatial covariance of cloud and rain water (or in simpler terms, small and large drops, respectively) is an important quantity for accurate prediction of the accretion rate in bulk microphysical parameterizations that account for subgrid variability using assumed probability density functions (pdfs). Past diagnoses of this covariance from remote sensing, in situ measurements and large eddy simulation output have implicitly assumed that the magnitude of the covariance is insensitive to grain size (i.e. horizontal resolution) and averaging length, but this is not the case because both cloud and rain water exhibit scale invariance across a wide range of scales - from tens of centimeters to tens of kilometers in the case of cloud water, a range that we will show is primarily limited by instrumentation and sampling issues. Since the individual variances systematically vary as a function of spatial scale, it should be expected that the covariance follows a similar relationship. In this study, we quantify the scaling properties of cloud and rain water content and their covariability from high frequency in situ aircraft measurements of marine stratocumulus taken over the southeastern Pacific Ocean aboard the NSF/NCAR C-130 during the VOCALS-REx field experiment of October-November 2008. First we confirm that cloud and rain water scale in distinct manners, indicating that there is a statistically and potentially physically significant difference in the spatial structure of the two fields. Next, we demonstrate that the covariance is a strong function of spatial scale, which implies important caveats regarding the ability of limited-area models with domains smaller than a few tens of kilometers across to accurately reproduce the spatial organization of precipitation. Finally, we present preliminary work on the development of a scale-aware parameterization of cloud-rain water subgrid covariability based in multifractal analysis intended for application in large-scale model microphysics schemes.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.
2017-01-01
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351
Furnham, Nicholas; Dawson, Natalie L; Rahman, Syed A; Thornton, Janet M; Orengo, Christine A
2016-01-29
Enzymes, as biological catalysts, form the basis of all forms of life. How these proteins have evolved their functions remains a fundamental question in biology. Over 100 years of detailed biochemistry studies, combined with the large volumes of sequence and protein structural data now available, means that we are able to perform large-scale analyses to address this question. Using a range of computational tools and resources, we have compiled information on all experimentally annotated changes in enzyme function within 379 structurally defined protein domain superfamilies, linking the changes observed in functions during evolution to changes in reaction chemistry. Many superfamilies show changes in function at some level, although one function often dominates one superfamily. We use quantitative measures of changes in reaction chemistry to reveal the various types of chemical changes occurring during evolution and to exemplify these by detailed examples. Additionally, we use structural information of the enzymes active site to examine how different superfamilies have changed their catalytic machinery during evolution. Some superfamilies have changed the reactions they perform without changing catalytic machinery. In others, large changes of enzyme function, in terms of both overall chemistry and substrate specificity, have been brought about by significant changes in catalytic machinery. Interestingly, in some superfamilies, relatives perform similar functions but with different catalytic machineries. This analysis highlights characteristics of functional evolution across a wide range of superfamilies, providing insights that will be useful in predicting the function of uncharacterised sequences and the design of new synthetic enzymes. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
Large-angle correlations in the cosmic microwave background
NASA Astrophysics Data System (ADS)
Efstathiou, George; Ma, Yin-Zhe; Hanson, Duncan
2010-10-01
It has been argued recently by Copi et al. 2009 that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, inflationary Lambda cold dark matter (ΛCDM) cosmology. We compare various estimators of the temperature correlation function showing how they depend on assumptions of statistical isotropy and how they perform on the Wilkinson Microwave Anisotropy Probe (WMAP) 5-yr Internal Linear Combination (ILC) maps with and without a sky cut. We show that the low multipole harmonics that determine the large-scale features of the temperature correlation function can be reconstructed accurately from the data that lie outside the sky cuts. The reconstructions are only weakly dependent on the assumed statistical properties of the temperature field. The temperature correlation functions computed from these reconstructions are in good agreement with those computed from the ILC map over the whole sky. We conclude that the large-scale angular correlation function for our realization of the sky is well determined. A Bayesian analysis of the large-scale correlations is presented, which shows that the data cannot exclude the standard ΛCDM model. We discuss the differences between our results and those of Copi et al. Either there exists a violation of statistical isotropy as claimed by Copi et al., or these authors have overestimated the significance of the discrepancy because of a posteriori choices of estimator, statistic and sky cut.
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
Resources for Functional Genomics Studies in Drosophila melanogaster
Mohr, Stephanie E.; Hu, Yanhui; Kim, Kevin; Housden, Benjamin E.; Perrimon, Norbert
2014-01-01
Drosophila melanogaster has become a system of choice for functional genomic studies. Many resources, including online databases and software tools, are now available to support design or identification of relevant fly stocks and reagents or analysis and mining of existing functional genomic, transcriptomic, proteomic, etc. datasets. These include large community collections of fly stocks and plasmid clones, “meta” information sites like FlyBase and FlyMine, and an increasing number of more specialized reagents, databases, and online tools. Here, we introduce key resources useful to plan large-scale functional genomics studies in Drosophila and to analyze, integrate, and mine the results of those studies in ways that facilitate identification of highest-confidence results and generation of new hypotheses. We also discuss ways in which existing resources can be used and might be improved and suggest a few areas of future development that would further support large- and small-scale studies in Drosophila and facilitate use of Drosophila information by the research community more generally. PMID:24653003
Transfection microarray and the applications.
Miyake, Masato; Yoshikawa, Tomohiro; Fujita, Satoshi; Miyake, Jun
2009-05-01
Microarray transfection has been extensively studied for high-throughput functional analysis of mammalian cells. However, control of efficiency and reproducibility are the critical issues for practical use. By using solid-phase transfection accelerators and nano-scaffold, we provide a highly efficient and reproducible microarray-transfection device, "transfection microarray". The device would be applied to the limited number of available primary cells and stem cells not only for large-scale functional analysis but also reporter-based time-lapse cellular event analysis.
HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan
2014-02-01
Modeling of large-scale spiking neural models is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). This simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.
Large scale rigidity-based flexibility analysis of biomolecules
Streinu, Ileana
2016-01-01
KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) is intended to work as seemlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583
Functional genomics (FG) screens, using RNAi or CRISPR technology, have become a standard tool for systematic, genome-wide loss-of-function studies for therapeutic target discovery. As in many large-scale assays, however, off-target effects, variable reagents' potency and experimental noise must be accounted for appropriately control for false positives.
A new energy transfer model for turbulent free shear flow
NASA Technical Reports Server (NTRS)
Liou, William W.-W.
1992-01-01
A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent flow geometries. The model is tested against a plane mixing layer. Reasonably good agreement is achieved. Finally, it is shown by using the Liapunov function method, the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.
USDA-ARS?s Scientific Manuscript database
Functional annotations of large plant genome projects mostly provide information on gene function and gene families based on the presence of protein domains and gene homology, but not necessarily in association with gene expression or metabolic and regulatory networks. These additional annotations a...
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Statistical properties of edge plasma turbulence in the Large Helical Device
NASA Astrophysics Data System (ADS)
Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.
2008-09-01
Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj
2014-01-01
Generating genomic resources in terms of molecular markers is imperative in molecular breeding for crop improvement. Though development and application of microsatellite markers in large-scale was reported in the model crop foxtail millet, no such large-scale study was conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested the function for ∼71.5% ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs representing the foxtail millet genome and the different functional groups showed high-level of cross-genera amplification at an average of ∼85% in eight millets and five non-millet species. The efficacy of the ILP markers for distinguishing the foxtail millet is demonstrated by observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of physically mapped ILP markers demonstrated substantial percentage of sequence-based orthology and syntenic relationship between foxtail millet chromosomes and sorghum (∼50%), maize (∼46%), rice (∼21%) and Brachypodium (∼21%) chromosomes. Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
Network analysis of mesoscale optical recordings to assess regional, functional connectivity.
Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H
2015-10-01
With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
The large-scale organization of metabolic networks
NASA Astrophysics Data System (ADS)
Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.
2000-10-01
In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
Zhu, Zhou; Ihle, Nathan T; Rejto, Paul A; Zarrinkar, Patrick P
2016-06-13
Genome-scale functional genomic screens across large cell line panels provide a rich resource for discovering tumor vulnerabilities that can lead to the next generation of targeted therapies. Their data analysis typically has focused on identifying genes whose knockdown enhances response in various pre-defined genetic contexts, which are limited by biological complexities as well as the incompleteness of our knowledge. We thus introduce a complementary data mining strategy to identify genes with exceptional sensitivity in subsets, or outlier groups, of cell lines, allowing an unbiased analysis without any a priori assumption about the underlying biology of dependency. Genes with outlier features are strongly and specifically enriched with those known to be associated with cancer and relevant biological processes, despite no a priori knowledge being used to drive the analysis. Identification of exceptional responders (outliers) may not lead only to new candidates for therapeutic intervention, but also tumor indications and response biomarkers for companion precision medicine strategies. Several tumor suppressors have an outlier sensitivity pattern, supporting and generalizing the notion that tumor suppressors can play context-dependent oncogenic roles. The novel application of outlier analysis described here demonstrates a systematic and data-driven analytical strategy to decipher large-scale functional genomic data for oncology target and precision medicine discoveries.
Revealing the Hidden Relationship by Sparse Modules in Complex Networks with a Large-Scale Analysis
Jiao, Qing-Ju; Huang, Yan; Liu, Wei; Wang, Xiao-Fan; Chen, Xiao-Shuang; Shen, Hong-Bin
2013-01-01
One of the remarkable features of networks is module that can provide useful insights into not only network organizations but also functional behaviors between their components. Comprehensive efforts have been devoted to investigating cohesive modules in the past decade. However, it is still not clear whether there are important structural characteristics of the nodes that do not belong to any cohesive module. In order to answer this question, we performed a large-scale analysis on 25 complex networks with different types and scales using our recently developed BTS (bintree seeking) algorithm, which is able to detect both cohesive and sparse modules in the network. Our results reveal that the sparse modules composed by the cohesively isolated nodes widely co-exist with the cohesive modules. Detailed analysis shows that both types of modules provide better characterization for the division of a network into functional units than merely cohesive modules, because the sparse modules possibly re-organize the nodes in the so-called cohesive modules, which lack obvious modular significance, into meaningful groups. Compared with cohesive modules, the sizes of sparse ones are generally smaller. Sparse modules are also found to have preferences in social and biological networks than others. PMID:23762457
Large-scale changes in network interactions as a physiological signature of spatial neglect
Baldassarre, Antonello; Ramsey, Lenny; Hacker, Carl L.; Callejas, Alicia; Astafiev, Serguei V.; Metcalf, Nicholas V.; Zinn, Kristi; Rengachary, Jennifer; Snyder, Abraham Z.; Carter, Alex R.; Shulman, Gordon L.
2014-01-01
The relationship between spontaneous brain activity and behaviour following focal injury is not well understood. Here, we report a large-scale study of resting state functional connectivity MRI and spatial neglect following stroke in a large (n = 84) heterogeneous sample of first-ever stroke patients (within 1–2 weeks). Spatial neglect, which is typically more severe after right than left hemisphere injury, includes deficits of spatial attention and motor actions contralateral to the lesion, and low general attention due to impaired vigilance/arousal. Patients underwent structural and resting state functional MRI scans, and spatial neglect was measured using the Posner spatial cueing task, and Mesulam and Behavioural Inattention Test cancellation tests. A principal component analysis of the behavioural tests revealed a main factor accounting for 34% of variance that captured three correlated behavioural deficits: visual neglect of the contralesional visual field, visuomotor neglect of the contralesional field, and low overall performance. In an independent sample (21 healthy subjects), we defined 10 resting state networks consisting of 169 brain regions: visual-fovea and visual-periphery, sensory-motor, auditory, dorsal attention, ventral attention, language, fronto-parietal control, cingulo-opercular control, and default mode. We correlated the neglect factor score with the strength of resting state functional connectivity within and across the 10 resting state networks. All damaged brain voxels were removed from the functional connectivity:behaviour correlational analysis. We found that the correlated behavioural deficits summarized by the factor score were associated with correlated multi-network patterns of abnormal functional connectivity involving large swaths of cortex. Specifically, dorsal attention and sensory-motor networks showed: (i) reduced interhemispheric functional connectivity; (ii) reduced anti-correlation with fronto-parietal and default mode networks in the right hemisphere; and (iii) increased intrahemispheric connectivity with the basal ganglia. These patterns of functional connectivity:behaviour correlations were stronger in patients with right- as compared to left-hemisphere damage and were independent of lesion volume. Our findings identify large-scale changes in resting state network interactions that are a physiological signature of spatial neglect and may relate to its right hemisphere lateralization. PMID:25367028
Systematic methods for defining coarse-grained maps in large biomolecules.
Zhang, Zhiyong
2015-01-01
Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing
2016-11-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.
Mejias, Jorge F.; Murray, John D.; Kennedy, Henry; Wang, Xiao-Jing
2016-01-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions. PMID:28138530
Preparation of fosmid libraries and functional metagenomic analysis of microbial community DNA.
Martínez, Asunción; Osburne, Marcia S
2013-01-01
One of the most important challenges in contemporary microbial ecology is to assign a functional role to the large number of novel genes discovered through large-scale sequencing of natural microbial communities that lack similarity to genes of known function. Functional screening of metagenomic libraries, that is, screening environmental DNA clones for the ability to confer an activity of interest to a heterologous bacterial host, is a promising approach for bridging the gap between metagenomic DNA sequencing and functional characterization. Here, we describe methods for isolating environmental DNA and constructing metagenomic fosmid libraries, as well as methods for designing and implementing successful functional screens of such libraries. © 2013 Elsevier Inc. All rights reserved.
MIPHENO: Data normalization for high throughput metabolic analysis.
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
The Conundrum of Functional Brain Networks: Small-World Efficiency or Fractal Modularity
Gallos, Lazaros K.; Sigman, Mariano; Makse, Hernán A.
2012-01-01
The human brain has been studied at multiple scales, from neurons, circuits, areas with well-defined anatomical and functional boundaries, to large-scale functional networks which mediate coherent cognition. In a recent work, we addressed the problem of the hierarchical organization in the brain through network analysis. Our analysis identified functional brain modules of fractal structure that were inter-connected in a small-world topology. Here, we provide more details on the use of network science tools to elaborate on this behavior. We indicate the importance of using percolation theory to highlight the modular character of the functional brain network. These modules present a fractal, self-similar topology, identified through fractal network methods. When we lower the threshold of correlations to include weaker ties, the network as a whole assumes a small-world character. These weak ties are organized precisely as predicted by theory maximizing information transfer with minimal wiring costs. PMID:22586406
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime,more » such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.« less
Jang, Min Jee; Nam, Yoonkey
2015-01-01
Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
Rayapuram, Channabasavangowda; Idänheimo, Niina; Hunter, Kerri; Kimura, Sachie; Merilo, Ebe; Vaattovaara, Aleksia; Oracz, Krystyna; Kaufholdt, David; Pallon, Andres; Anggoro, Damar Tri; Glów, Dawid; Lowe, Jennifer; Zhou, Ji; Mohammadi, Omid; Puukko, Tuomas; Albert, Andreas; Lang, Hans; Ernst, Dieter; Kollist, Hannes; Brosché, Mikael; Durner, Jörg; Borst, Jan Willem; Collinge, David B.; Karpiński, Stanisław; Lyngkjær, Michael F.; Robatzek, Silke; Wrzaczek, Michael; Kangasjärvi, Jaakko
2015-01-01
Cysteine-rich receptor-like kinases (CRKs) are transmembrane proteins characterized by the presence of two domains of unknown function 26 (DUF26) in their ectodomain. The CRKs form one of the largest groups of receptor-like protein kinases in plants, but their biological functions have so far remained largely uncharacterized. We conducted a large-scale phenotyping approach of a nearly complete crk T-DNA insertion line collection showing that CRKs control important aspects of plant development and stress adaptation in response to biotic and abiotic stimuli in a non-redundant fashion. In particular, the analysis of reactive oxygen species (ROS)-related stress responses, such as regulation of the stomatal aperture, suggests that CRKs participate in ROS/redox signalling and sensing. CRKs play general and fine-tuning roles in the regulation of stomatal closure induced by microbial and abiotic cues. Despite their great number and high similarity, large-scale phenotyping identified specific functions in diverse processes for many CRKs and indicated that CRK2 and CRK5 play predominant roles in growth regulation and stress adaptation, respectively. As a whole, the CRKs contribute to specificity in ROS signalling. Individual CRKs control distinct responses in an antagonistic fashion suggesting future potential for using CRKs in genetic approaches to improve plant performance and stress tolerance. PMID:26197346
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Bowman, Anthony D.; Szaflarski, Jerzy P.; Visscher, Kristina M.
2016-01-01
Psychophysical and neurobiological evidence suggests that central and peripheral vision are specialized for different functions. This specialization of function might be expected to lead to differences in the large-scale functional interactions of early cortical areas that represent central and peripheral visual space. Here, we characterize differences in whole-brain functional connectivity among sectors in primary visual cortex (V1) corresponding to central, near-peripheral, and far-peripheral vision during resting fixation. Importantly, our analyses reveal that eccentricity sectors in V1 have different functional connectivity with non-visual areas associated with large-scale brain networks. Regions associated with the fronto-parietal control network are most strongly connected with central sectors of V1, regions associated with the cingulo-opercular control network are most strongly connected with near-peripheral sectors of V1, and regions associated with the default mode and auditory networks are most strongly connected with far-peripheral sectors of V1. Additional analyses suggest that similar patterns are present during eyes-closed rest. These results suggest that different types of visual information may be prioritized by large-scale brain networks with distinct functional profiles, and provide insights into how the small-scale functional specialization within early visual regions such as V1 relates to the large-scale organization of functionally distinct whole-brain networks. PMID:27554527
Large-scale coupling dynamics of instructed reversal learning.
Mohr, Holger; Wolfensteller, Uta; Ruge, Hannes
2018-02-15
The ability to rapidly learn from others by instruction is an important characteristic of human cognition. A recent study found that the rapid transfer from initial instructions to fluid behavior is supported by changes of functional connectivity between and within several large-scale brain networks, and particularly by the coupling of the dorsal attention network (DAN) with the cingulo-opercular network (CON). In the present study, we extended this approach to investigate how these brain networks interact when stimulus-response mappings are altered by novel instructions. We hypothesized that residual stimulus-response associations from initial practice might negatively impact the ability to implement novel instructions. Using functional imaging and large-scale connectivity analysis, we found that functional coupling between the CON and DAN was generally at a higher level during initial than reversal learning. Examining the learning-related connectivity dynamics between the CON and DAN in more detail by means of multivariate patterns analyses, we identified a specific subset of connections which showed a particularly high increase in connectivity during initial learning compared to reversal learning. This finding suggests that the CON-DAN connections can be separated into two functionally dissociable yet spatially intertwined subsystems supporting different aspects of short-term task automatization. Copyright © 2017 Elsevier Inc. All rights reserved.
Large-scale automated histology in the pursuit of connectomes.
Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert
2011-11-09
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
Large-Scale Automated Histology in the Pursuit of Connectomes
Bharioke, Arjun; Blinder, Pablo; Bock, Davi D.; Briggman, Kevin L.; Chklovskii, Dmitri B.; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P.; Lee, Wei-Chung Allen; Meyer, Hanno S.; Micheva, Kristina D.; Oberlaender, Marcel; Prohaska, Steffen; Reid, R. Clay; Smith, Stephen J.; Takemura, Shinya; Tsai, Philbert S.; Sakmann, Bert
2011-01-01
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity. PMID:22072665
Development and validation of a measure of pediatric oral health-related quality of life: the POQL
Huntington, Noelle L; Spetter, Dante; Jones, Judith A.; Rich, Sharon E.; Garcia, Raul I.; Spiro, Avron
2011-01-01
Objective To develop a brief measure of oral health-related quality of life in children and demonstrate its reliability and validity in a diverse population. Methods We administered the initial 20-item POQL to children (Child Self-Report) and parents (Parent Report on Child) from diverse populations in both school-based and clinic-based settings. Clinical oral health status was measured on a subset of children. We used factor analysis to determine the underlying scales and then reduced the measure to 10 items based on several considerations. Multitrait analysis on the resulting 10-item POQL was used to reaffirm the discrimination of scales and assess the measure’s internal consistency and interscale correlations. We established discriminant and convergent validity with clinical status, perceived oral health and responses on the PedsQL and determined sensitivity to change with children undergoing ECC surgical repair. Results Factor analysis returned a four-scale solution for the initial items – Physical Functioning, Role Functioning, Social Functioning and Emotional Functioning. The reduced items represented the same four scales – two each on Physical and Role and three each on Social and Emotional. Good reliability and validity were shown for the POQL as a whole and for each of the scales. Conclusions The POQL is a valid and reliable measure of oral health-related quality of life for use in pre-school and school-aged children, with high utility for both clinical assessments and large-scale population studies. PMID:21972458
Development and validation of a measure of pediatric oral health-related quality of life: the POQL.
Huntington, Noelle L; Spetter, Dante; Jones, Judith A; Rich, Sharron E; Garcia, Raul I; Spiro, Avron
2011-01-01
To develop a brief measure of oral health-related quality of life (OHQL) in children and demonstrate its reliability and validity in a diverse population. We administered the initial 20-item Pediatric Oral Health-Related Quality of Life (POQL) to children (Child Self-Report) and parents (Parent Report on Child) from diverse populations in both school-based and clinic-based settings. Clinical oral health status was measured on a subset of children. We used factor analysis to determine the underlying scales and then reduced the measure to 10 items based on several considerations. Multitrait analysis on the resulting 10-item POQL was used to reaffirm the discrimination of scales and assess the measure's internal consistency and interscale correlations. We established discriminant and convergent validity with clinical status, perceived oral health and responses on the PedsQL, and determined sensitivity to change with children undergoing ECC surgical repair. Factor analysis returned a four-scale solution for the initial items--Physical Functioning, Role Functioning, Social Functioning, and Emotional Functioning. The reduced items represented the same four scales--two each on Physical and Role and three each on Social and Emotional. Good reliability and validity were shown for the POQL as a whole and for each of the scales. The POQL is a valid and reliable measure of OHQL for use in preschool and school-aged children, with high utility for both clinical assessments and large-scale population studies.
Spatial-temporal-spectral EEG patterns of BOLD functional network connectivity dynamics
NASA Astrophysics Data System (ADS)
Lamoš, Martin; Mareček, Radek; Slavíček, Tomáš; Mikl, Michal; Rektor, Ivan; Jan, Jiří
2018-06-01
Objective. Growing interest in the examination of large-scale brain network functional connectivity dynamics is accompanied by an effort to find the electrophysiological correlates. The commonly used constraints applied to spatial and spectral domains during electroencephalogram (EEG) data analysis may leave part of the neural activity unrecognized. We propose an approach that blindly reveals multimodal EEG spectral patterns that are related to the dynamics of the BOLD functional network connectivity. Approach. The blind decomposition of EEG spectrogram by parallel factor analysis has been shown to be a useful technique for uncovering patterns of neural activity. The simultaneously acquired BOLD fMRI data were decomposed by independent component analysis. Dynamic functional connectivity was computed on the component’s time series using a sliding window correlation, and between-network connectivity states were then defined based on the values of the correlation coefficients. ANOVA tests were performed to assess the relationships between the dynamics of between-network connectivity states and the fluctuations of EEG spectral patterns. Main results. We found three patterns related to the dynamics of between-network connectivity states. The first pattern has dominant peaks in the alpha, beta, and gamma bands and is related to the dynamics between the auditory, sensorimotor, and attentional networks. The second pattern, with dominant peaks in the theta and low alpha bands, is related to the visual and default mode network. The third pattern, also with peaks in the theta and low alpha bands, is related to the auditory and frontal network. Significance. Our previous findings revealed a relationship between EEG spectral pattern fluctuations and the hemodynamics of large-scale brain networks. In this study, we suggest that the relationship also exists at the level of functional connectivity dynamics among large-scale brain networks when no standard spatial and spectral constraints are applied on the EEG data.
Dynamics of Intersubject Brain Networks during Anxious Anticipation
Najafi, Mahshid; Kinnison, Joshua; Pessoa, Luiz
2017-01-01
How do large-scale brain networks reorganize during the waxing and waning of anxious anticipation? Here, threat was dynamically modulated during human functional MRI as two circles slowly meandered on the screen; if they touched, an unpleasant shock was delivered. We employed intersubject correlation analysis, which allowed the investigation of network-level functional connectivity across brains, and sought to determine how network connectivity changed during periods of approach (circles moving closer) and periods of retreat (circles moving apart). Analysis of positive connection weights revealed that dynamic threat altered connectivity within and between the salience, executive, and task-negative networks. For example, dynamic functional connectivity increased within the salience network during approach and decreased during retreat. The opposite pattern was found for the functional connectivity between the salience and task-negative networks: decreases during approach and increases during approach. Functional connections between subcortical regions and the salience network also changed dynamically during approach and retreat periods. Subcortical regions exhibiting such changes included the putative periaqueductal gray, putative habenula, and putative bed nucleus of the stria terminalis. Additional analysis of negative functional connections revealed dynamic changes, too. For example, negative weights within the salience network decreased during approach and increased during retreat, opposite what was found for positive weights. Together, our findings unraveled dynamic features of functional connectivity of large-scale networks and subcortical regions across participants while threat levels varied continuously, and demonstrate the potential of characterizing emotional processing at the level of dynamic networks. PMID:29209184
Lan, Hui; Carson, Rachel; Provart, Nicholas J; Bonner, Anthony J
2007-09-21
Arabidopsis thaliana is the model species of current plant genomic research with a genome size of 125 Mb and approximately 28,000 genes. The function of half of these genes is currently unknown. The purpose of this study is to infer gene function in Arabidopsis using machine-learning algorithms applied to large-scale gene expression data sets, with the goal of identifying genes that are potentially involved in plant response to abiotic stress. Using in house and publicly available data, we assembled a large set of gene expression measurements for A. thaliana. Using those genes of known function, we first evaluated and compared the ability of basic machine-learning algorithms to predict which genes respond to stress. Predictive accuracy was measured using ROC50 and precision curves derived through cross validation. To improve accuracy, we developed a method for combining these classifiers using a weighted-voting scheme. The combined classifier was then trained on genes of known function and applied to genes of unknown function, identifying genes that potentially respond to stress. Visual evidence corroborating the predictions was obtained using electronic Northern analysis. Three of the predicted genes were chosen for biological validation. Gene knockout experiments confirmed that all three are involved in a variety of stress responses. The biological analysis of one of these genes (At1g16850) is presented here, where it is shown to be necessary for the normal response to temperature and NaCl. Supervised learning methods applied to large-scale gene expression measurements can be used to predict gene function. However, the ability of basic learning methods to predict stress response varies widely and depends heavily on how much dimensionality reduction is used. Our method of combining classifiers can improve the accuracy of such predictions - in this case, predictions of genes involved in stress response in plants - and it effectively chooses the appropriate amount of dimensionality reduction automatically. The method provides a useful means of identifying genes in A. thaliana that potentially respond to stress, and we expect it would be useful in other organisms and for other gene functions.
Scaling within the spectral function approach
NASA Astrophysics Data System (ADS)
Sobczyk, J. E.; Rocco, N.; Lovato, A.; Nieves, J.
2018-03-01
Scaling features of the nuclear electromagnetic response functions unveil aspects of nuclear dynamics that are crucial for interpreting neutrino- and electron-scattering data. In the large momentum-transfer regime, the nucleon-density response function defines a universal scaling function, which is independent of the nature of the probe. In this work, we analyze the nucleon-density response function of 12C, neglecting collective excitations. We employ particle and hole spectral functions obtained within two distinct many-body methods, both widely used to describe electroweak reactions in nuclei. We show that the two approaches provide compatible nucleon-density scaling functions that for large momentum transfers satisfy first-kind scaling. Both methods yield scaling functions characterized by an asymmetric shape, although less pronounced than that of experimental scaling functions. This asymmetry, only mildly affected by final state interactions, is mostly due to nucleon-nucleon correlations, encoded in the continuum component of the hole spectral function.
LSD: Large Survey Database framework
NASA Astrophysics Data System (ADS)
Juric, Mario
2012-09-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than >10^2 nodes, and can be made to function in "shared nothing" architectures.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis . The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis . Also
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
Abstract More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation of significance level step and multiple hypothesis testing step, the computation scalability and efficiency are poor on large-scale datasets. We proposed paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of whole LINCS phase I dataset (GSE92742) was reduced to nearly half hour on a 1000 node cluster on Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
StructRNAfinder: an automated pipeline and web server for RNA families prediction.
Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius
2018-02-17
The function of many noncoding RNAs (ncRNAs) depend upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers should utilize multiple tools, which require the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even to researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family, according to Rfam database but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provided a stand-alone version of StructRNAfinder to be used in large-scale projects. The tool was developed under GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder relies on the large-scale processing and integrating the data obtained by each tool and database employed along the workflow, of which several files are generated and displayed in user-friendly reports, useful for downstream analyses and data exploration.
diCenzo, George C; Finan, Turlough M
2018-01-01
The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.
NASA Astrophysics Data System (ADS)
Thorslund, Josefin; Jarsjö, Jerker; Destouni, Georgia
2017-04-01
Wetlands are often considered as nature-based solutions that can provide a multitude of services of great social, economic and environmental value to humankind. The services may include recreation, greenhouse gas sequestration, contaminant retention, coastal protection, groundwater level and soil moisture regulation, flood regulation and biodiversity support. Changes in land-use, water use and climate can all impact wetland functions and occur at scales extending well beyond the local scale of an individual wetland. However, in practical applications, management decisions usually regard and focus on individual wetland sites and local conditions. To understand the potential usefulness and services of wetlands as larger-scale nature-based solutions, e.g. for mitigating negative impacts from large-scale change pressures, one needs to understand the combined function multiple wetlands at the relevant large scales. We here systematically investigate if and to what extent research so far has addressed the large-scale dynamics of landscape systems with multiple wetlands, which are likely to be relevant for understanding impacts of regional to global change. Our investigation regards key changes and impacts of relevance for nature-based solutions, such as large-scale nutrient and pollution retention, flow regulation and coastal protection. Although such large-scale knowledge is still limited, evidence suggests that the aggregated functions and effects of multiple wetlands in the landscape can differ considerably from those observed at individual wetlands. Such scale differences may have important implications for wetland function-effect predictability and management under large-scale change pressures and impacts, such as those of climate change.
Using a Mixture IRT Model to Understand English Learner Performance on Large-Scale Assessments
ERIC Educational Resources Information Center
Shea, Christine A.
2013-01-01
The purpose of this study was to determine whether an eighth grade state-level math assessment contained items that function differentially (DIF) for English Learner students (EL) as compared to English Only students (EO) and if so, what factors might have caused DIF. To determine this, Differential Item Functioning (DIF) analysis was employed.…
Sanzol, Javier
2010-05-14
Gene duplication is central to genome evolution. In plants, genes can be duplicated through small-scale events and large-scale duplications often involving polyploidy. The apple belongs to the subtribe Pyrinae (Rosaceae), a diverse lineage that originated via allopolyploidization. Both small-scale duplications and polyploidy may have been important mechanisms shaping the genome of this species. This study evaluates the gene duplication and polyploidy history of the apple by characterizing duplicated genes in this species using EST data. Overall, 68% of the apple genes were clustered into families with a mean copy-number of 4.6. Analysis of the age distribution of gene duplications supported a continuous mode of small-scale duplications, plus two episodes of large-scale duplicates of vastly different ages. The youngest was consistent with the polyploid origin of the Pyrinae 37-48 MYBP, whereas the older may be related to gamma-triplication; an ancient hexapolyploidization previously characterized in the four sequenced eurosid genomes and basal to the eurosid-asterid divergence. Duplicated genes were studied for functional diversification with an emphasis on young paralogs; those originated during or after the formation of the Pyrinae lineage. Unequal assignment of single-copy genes and gene families to Gene Ontology categories suggested functional bias in the pattern of gene retention of paralogs. Young paralogs related to signal transduction, metabolism, and energy pathways have been preferentially retained. Non-random retention of duplicated genes seems to have mediated the expansion of gene families, some of which may have substantially increased their members after the origin of the Pyrinae. The joint analysis of over-duplicated functional categories and phylogenies, allowed evaluation of the role of both polyploidy and small-scale duplications during this process. Finally, gene expression analysis indicated that 82% of duplicated genes, including 80% of young paralogs, showed uncorrelated expression profiles, suggesting extensive subfunctionalization and a role of gene duplication in the acquisition of novel patterns of gene expression. This study reports a genome-wide analysis of the mode of gene duplication in the apple, and provides evidence for its role in genome functional diversification by characterising three major processes: selective retention of paralogs, amplification of gene families, and changes in gene expression.
Mucci, Armida; Rucci, Paola; Rocca, Paola; Bucci, Paola; Gibertoni, Dino; Merlotti, Eleonora; Galderisi, Silvana; Maj, Mario
2014-10-01
The study aimed to assess the construct validity, internal consistency and factor structure of the Specific Levels of Functioning Scale (SLOF), a multidimensional instrument assessing real life functioning. The study was carried out in 895 Italian people with schizophrenia, all living in the community and attending the outpatient units of 26 university psychiatric clinics and/or community mental health departments. The construct validity of the SLOF was analyzed by means of the multitrait-multimethod approach, using the Personal and Social Performance (PSP) Scale as the gold standard. The factor structure of the SLOF was examined using both an exploratory principal component analysis and a confirmatory factor analysis. The six factors identified using exploratory principal component analysis explained 57.1% of the item variance. The examination of the multitrait-multimethod matrix revealed that the SLOF factors had high correlations with PSP factors measuring the same constructs and low correlations with PSP factors measuring different constructs. The confirmatory factor analysis (CFA) corroborated the 6-factor structure reported in the original validation study. Loadings were all significant and ranged from a minimum of 0.299 to a maximum of 0.803. The CFA model was adequately powered and had satisfactory goodness of fit indices (comparative fit index=0.927, Tucker-Lewis index=0.920 and root mean square error of approximation=0.047, 95% CI 0.045-0.049). The present study confirms, in a large sample of Italian people with schizophrenia living in the community, that the SLOF is a reliable and valid instrument for the assessment of social functioning. It has good construct validity and internal consistency, and a well-defined factor structure. Copyright © 2014 Elsevier B.V. All rights reserved.
Large-Scale Circulation and Climate Variability. Chapter 5
NASA Technical Reports Server (NTRS)
Perlwitz, J.; Knutson, T.; Kossin, J. P.; LeGrande, A. N.
2017-01-01
The causes of regional climate trends cannot be understood without considering the impact of variations in large-scale atmospheric circulation and an assessment of the role of internally generated climate variability. There are contributions to regional climate trends from changes in large-scale latitudinal circulation, which is generally organized into three cells in each hemisphere-Hadley cell, Ferrell cell and Polar cell-and which determines the location of subtropical dry zones and midlatitude jet streams. These circulation cells are expected to shift poleward during warmer periods, which could result in poleward shifts in precipitation patterns, affecting natural ecosystems, agriculture, and water resources. In addition, regional climate can be strongly affected by non-local responses to recurring patterns (or modes) of variability of the atmospheric circulation or the coupled atmosphere-ocean system. These modes of variability represent preferred spatial patterns and their temporal variation. They account for gross features in variance and for teleconnections which describe climate links between geographically separated regions. Modes of variability are often described as a product of a spatial climate pattern and an associated climate index time series that are identified based on statistical methods like Principal Component Analysis (PC analysis), which is also called Empirical Orthogonal Function Analysis (EOF analysis), and cluster analysis.
NASA Astrophysics Data System (ADS)
Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano; Rodríguez-Quintero, José; Zafeiropoulos, Savvas
2018-03-01
We report on an instanton-based analysis of the gluon Green functions in the Landau gauge for low momenta; in particular we use lattice results for αs in the symmetric momentum subtraction scheme (MOM) for large-volume lattice simulations. We have exploited quenched gauge field configurations, Nf = 0, with both Wilson and tree-level Symanzik improved actions, and unquenched ones with Nf = 2 + 1 and Nf = 2 + 1 + 1 dynamical flavors (domain wall and twisted-mass fermions, respectively). We show that the dominance of instanton correlations on the low-momenta gluon Green functions can be applied to the determination of phenomenological parameters of the instanton liquid and, eventually, to a determination of the lattice spacing. We furthermore apply the Gradient Flow to remove short-distance fluctuations. The Gradient Flow gets rid of the QCD scale, ΛQCD, and reveals that the instanton prediction extents to large momenta. For those gauge field configurations free of quantum fluctuations, the direct study of topological charge density shows the appearance of large-scale lumps that can be identified as instantons, giving access to a direct study of the instanton density and size distribution that is compatible with those extracted from the analysis of the Green functions.
The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence
NASA Astrophysics Data System (ADS)
Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang
2018-02-01
The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogenous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS will lead to a non-monotonic variation of the particle collision statistics, including radial distribution function, radial relative velocity, and the collision kernel. The peak of radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and the analysis of the local flowstructure characteristic variable at particle position indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of radial relative velocity, i.e. the skewness, which exhibits a strong correlation to the variance of radial distribution function in FDNS. As a whole, the radial distribution function, together with radial relative velocity, can compensate the SGS effects for the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, it still leaves considerable errors for { St}_k <3.0.
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2) δ JLS (p) and the scaled mixing parameters (L + 1/2)ɛ JLS (p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become an universal function of b, independent on L. This allows to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long range components of the two-body potential. Implications for peripheral waves obtained in chiral perturbation theory interactions to fifth order (N5LO) or from the large body of NN data considered in the SAID partial wave analysis are also drawn from comparing them with other phenomenological high-quality interactions, constructed to fit scattering data as well. We find that both N5LO and SAID peripheral waves disagree more than 5σ with the Granada-2013 statistical analysis, more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database and about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
PetIGA: A framework for high-performance isogeometric analysis
Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...
2016-05-25
We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility ofmore » PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.« less
Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
As various functions were included in large scale complex engineering, and each function would be conducted with completion of one or more projects, combined projects affecting their functions should be located. Based on the types of project portfolio, the relationship of projects and their functional objectives were analyzed. On that premise, portfolio projects-technics based on their functional objectives were introduced, then we studied and raised the principles of portfolio projects-technics based on the functional objectives of projects. In addition, The processes of combined projects were also constructed. With the help of portfolio projects-technics based on the functional objectives of projects, our research findings laid a good foundation for management of large scale complex engineering portfolio management.
Combined heat and power supply using Carnot engines
NASA Astrophysics Data System (ADS)
Horlock, J. H.
The Marshall Report on the thermodynamic and economic feasibility of introducing large scale combined heat and electrical power generation (CHP) into the United Kingdom is summarized. Combinations of reversible power plant (Carnot engines) to meet a given demand of power and heat production are analyzed. The Marshall Report states that fairly large scale CHP plants are an attractive energy saving option for areas of high heat load densities. Analysis shows that for given requirements, the total heat supply and utilization factor are functions of heat output, reservoir supply temperature, temperature of heat rejected to the reservoir, and an intermediate temperature for district heating.
Local loss and spatial homogenization of plant diversity reduce ecosystem multifunctionality.
Hautier, Yann; Isbell, Forest; Borer, Elizabeth T; Seabloom, Eric W; Harpole, W Stanley; Lind, Eric M; MacDougall, Andrew S; Stevens, Carly J; Adler, Peter B; Alberti, Juan; Bakker, Jonathan D; Brudvig, Lars A; Buckley, Yvonne M; Cadotte, Marc; Caldeira, Maria C; Chaneton, Enrique J; Chu, Chengjin; Daleo, Pedro; Dickman, Christopher R; Dwyer, John M; Eskelinen, Anu; Fay, Philip A; Firn, Jennifer; Hagenah, Nicole; Hillebrand, Helmut; Iribarne, Oscar; Kirkman, Kevin P; Knops, Johannes M H; La Pierre, Kimberly J; McCulley, Rebecca L; Morgan, John W; Pärtel, Meelis; Pascual, Jesus; Price, Jodi N; Prober, Suzanne M; Risch, Anita C; Sankaran, Mahesh; Schuetz, Martin; Standish, Rachel J; Virtanen, Risto; Wardle, Glenda M; Yahdjian, Laura; Hector, Andy
2018-01-01
Biodiversity is declining in many local communities while also becoming increasingly homogenized across space. Experiments show that local plant species loss reduces ecosystem functioning and services, but the role of spatial homogenization of community composition and the potential interaction between diversity at different scales in maintaining ecosystem functioning remains unclear, especially when many functions are considered (ecosystem multifunctionality). We present an analysis of eight ecosystem functions measured in 65 grasslands worldwide. We find that more diverse grasslands-those with both species-rich local communities (α-diversity) and large compositional differences among localities (β-diversity)-had higher levels of multifunctionality. Moreover, α- and β-diversity synergistically affected multifunctionality, with higher levels of diversity at one scale amplifying the contribution to ecological functions at the other scale. The identity of species influencing ecosystem functioning differed among functions and across local communities, explaining why more diverse grasslands maintained greater functionality when more functions and localities were considered. These results were robust to variation in environmental drivers. Our findings reveal that plant diversity, at both local and landscape scales, contributes to the maintenance of multiple ecosystem services provided by grasslands. Preserving ecosystem functioning therefore requires conservation of biodiversity both within and among ecological communities.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters and nonlinear scaling effects on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them using scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is identification of TFs for each model parameter: both functional forms and geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or were derived in an ad-hoc, heuristic manner, potentially not utilizing maximum information content contained in the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which and to what extent geophysical attributes are related to model parameters. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis using various hydrologic signatures as objectives to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
Park, Y.; Krause, E.; Dodelson, S.; ...
2016-09-30
The joint analysis of galaxy-galaxy lensing and galaxy clustering is a promising method for inferring the growth function of large scale structure. Our analysis will be carried out on data from the Dark Energy Survey (DES), with its measurements of both the distribution of galaxies and the tangential shears of background galaxies induced by these foreground lenses. We develop a practical approach to modeling the assumptions and systematic effects affecting small scale lensing, which provides halo masses, and large scale galaxy clustering. Introducing parameters that characterize the halo occupation distribution (HOD), photometric redshift uncertainties, and shear measurement errors, we studymore » how external priors on different subsets of these parameters affect our growth constraints. Degeneracies within the HOD model, as well as between the HOD and the growth function, are identified as the dominant source of complication, with other systematic effects sub-dominant. The impact of HOD parameters and their degeneracies necessitate the detailed joint modeling of the galaxy sample that we employ. Finally, we conclude that DES data will provide powerful constraints on the evolution of structure growth in the universe, conservatively/optimistically constraining the growth function to 7.9%/4.8% with its first-year data that covered over 1000 square degrees, and to 3.9%/2.3% with its full five-year data that will survey 5000 square degrees, including both statistical and systematic uncertainties.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.; Krause, E.; Dodelson, S.
The joint analysis of galaxy-galaxy lensing and galaxy clustering is a promising method for inferring the growth function of large scale structure. Our analysis will be carried out on data from the Dark Energy Survey (DES), with its measurements of both the distribution of galaxies and the tangential shears of background galaxies induced by these foreground lenses. We develop a practical approach to modeling the assumptions and systematic effects affecting small scale lensing, which provides halo masses, and large scale galaxy clustering. Introducing parameters that characterize the halo occupation distribution (HOD), photometric redshift uncertainties, and shear measurement errors, we studymore » how external priors on different subsets of these parameters affect our growth constraints. Degeneracies within the HOD model, as well as between the HOD and the growth function, are identified as the dominant source of complication, with other systematic effects sub-dominant. The impact of HOD parameters and their degeneracies necessitate the detailed joint modeling of the galaxy sample that we employ. Finally, we conclude that DES data will provide powerful constraints on the evolution of structure growth in the universe, conservatively/optimistically constraining the growth function to 7.9%/4.8% with its first-year data that covered over 1000 square degrees, and to 3.9%/2.3% with its full five-year data that will survey 5000 square degrees, including both statistical and systematic uncertainties.« less
Introduction to bioinformatics.
Can, Tolga
2014-01-01
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
Large-scale changes in network interactions as a physiological signature of spatial neglect.
Baldassarre, Antonello; Ramsey, Lenny; Hacker, Carl L; Callejas, Alicia; Astafiev, Serguei V; Metcalf, Nicholas V; Zinn, Kristi; Rengachary, Jennifer; Snyder, Abraham Z; Carter, Alex R; Shulman, Gordon L; Corbetta, Maurizio
2014-12-01
The relationship between spontaneous brain activity and behaviour following focal injury is not well understood. Here, we report a large-scale study of resting state functional connectivity MRI and spatial neglect following stroke in a large (n=84) heterogeneous sample of first-ever stroke patients (within 1-2 weeks). Spatial neglect, which is typically more severe after right than left hemisphere injury, includes deficits of spatial attention and motor actions contralateral to the lesion, and low general attention due to impaired vigilance/arousal. Patients underwent structural and resting state functional MRI scans, and spatial neglect was measured using the Posner spatial cueing task, and Mesulam and Behavioural Inattention Test cancellation tests. A principal component analysis of the behavioural tests revealed a main factor accounting for 34% of variance that captured three correlated behavioural deficits: visual neglect of the contralesional visual field, visuomotor neglect of the contralesional field, and low overall performance. In an independent sample (21 healthy subjects), we defined 10 resting state networks consisting of 169 brain regions: visual-fovea and visual-periphery, sensory-motor, auditory, dorsal attention, ventral attention, language, fronto-parietal control, cingulo-opercular control, and default mode. We correlated the neglect factor score with the strength of resting state functional connectivity within and across the 10 resting state networks. All damaged brain voxels were removed from the functional connectivity:behaviour correlational analysis. We found that the correlated behavioural deficits summarized by the factor score were associated with correlated multi-network patterns of abnormal functional connectivity involving large swaths of cortex. Specifically, dorsal attention and sensory-motor networks showed: (i) reduced interhemispheric functional connectivity; (ii) reduced anti-correlation with fronto-parietal and default mode networks in the right hemisphere; and (iii) increased intrahemispheric connectivity with the basal ganglia. These patterns of functional connectivity:behaviour correlations were stronger in patients with right- as compared to left-hemisphere damage and were independent of lesion volume. Our findings identify large-scale changes in resting state network interactions that are a physiological signature of spatial neglect and may relate to its right hemisphere lateralization. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Research on the self-absorption corrections for PGNAA of large samples
NASA Astrophysics Data System (ADS)
Yang, Jian-Bo; Liu, Zhi; Chang, Kang; Li, Rui
2017-02-01
When a large sample is analysed with the prompt gamma neutron activation analysis (PGNAA) neutron self-shielding and gamma self-absorption affect the accuracy, the correction method for the detection efficiency of the relative H of each element in a large sample is described. The influences of the thickness and density of the cement samples on the H detection efficiency, as well as the impurities Fe2O3 and SiO2 on the prompt γ ray yield for each element in the cement samples, were studied. The phase functions for Ca, Fe, and Si on H with changes in sample thickness and density were provided to avoid complicated procedures for preparing the corresponding density or thickness scale for measuring samples under each density or thickness value and to present a simplified method for the measurement efficiency scale for prompt-gamma neutron activation analysis.
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided from visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as the Chlorophyll-a (Chl-a), and the Sea Surface Temperature (SST), rarely studied. The specific objectives of this study are to show how the small scale heterogeneities of satellite images can be characterised using tools borrowed from the fields of turbulence. For that purpose, we show how the structure function, which is classically used in the frame of scaling time series analysis, can be used also in 2D. The main advantage of this method is that it can be applied to process images which have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a fractional Brownian simulation in 2D, that the structure function (SF) can be used with randomly sampled couple of points, and verify that 1 million of couple of points provides enough statistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099
2015-03-15
A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models as the flow fields obtained from accurate LESs should contain structures of morphology statisticallymore » similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scales structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second-invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard one.« less
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. In the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, enmpirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights on the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand : precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach basically consisted in 1- decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, 2- generating one statistical downscaling model per time-scale, 3- summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD ; in addition, the scale-dependent spatial patterns associated to the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either prepciptation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling to the parameter of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
Yang, Haishui; Zang, Yanyan; Yuan, Yongge; Tang, Jianjun; Chen, Xin
2012-04-12
Arbuscular mycorrhizal fungi (AMF) can form obligate symbioses with the vast majority of land plants, and AMF distribution patterns have received increasing attention from researchers. At the local scale, the distribution of AMF is well documented. Studies at large scales, however, are limited because intensive sampling is difficult. Here, we used ITS rDNA sequence metadata obtained from public databases to study the distribution of AMF at continental and global scales. We also used these sequence metadata to investigate whether host plant is the main factor that affects the distribution of AMF at large scales. We defined 305 ITS virtual taxa (ITS-VTs) among all sequences of the Glomeromycota by using a comprehensive maximum likelihood phylogenetic analysis. Each host taxonomic order averaged about 53% specific ITS-VTs, and approximately 60% of the ITS-VTs were host specific. Those ITS-VTs with wide host range showed wide geographic distribution. Most ITS-VTs occurred in only one type of host functional group. The distributions of most ITS-VTs were limited across ecosystem, across continent, across biogeographical realm, and across climatic zone. Non-metric multidimensional scaling analysis (NMDS) showed that AMF community composition differed among functional groups of hosts, and among ecosystem, continent, biogeographical realm, and climatic zone. The Mantel test showed that AMF community composition was significantly correlated with plant community composition among ecosystem, among continent, among biogeographical realm, and among climatic zone. The structural equation modeling (SEM) showed that the effects of ecosystem, continent, biogeographical realm, and climatic zone were mainly indirect on AMF distribution, but plant had strongly direct effects on AMF. The distribution of AMF as indicated by ITS rDNA sequences showed a pattern of high endemism at large scales. This pattern indicates high specificity of AMF for host at different scales (plant taxonomic order and functional group) and high selectivity from host plants for AMF. The effects of ecosystemic, biogeographical, continental and climatic factors on AMF distribution might be mediated by host plants.
Planck 2015 results. XVI. Isotropy and statistics of the CMB
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Akrami, Y.; Aluri, P. K.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Casaponsa, B.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Contreras, D.; Couchot, F.; Coulais, A.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Fergusson, J.; Fernandez-Cobos, R.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Liu, H.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mikkelsen, K.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Pant, N.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Rotti, A.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Souradeep, T.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.
2016-09-01
We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The "Cold Spot" is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.
Planck 2015 results: XVI. Isotropy and statistics of the CMB
Ade, P. A. R.; Aghanim, N.; Akrami, Y.; ...
2016-09-20
In this paper, we test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect ourmore » studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Finally, where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ade, P. A. R.; Aghanim, N.; Akrami, Y.
In this paper, we test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect ourmore » studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Finally, where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.« less
Xu, Jiansong; Potenza, Marc N.; Calhoun, Vince D.; Zhang, Rubin; Yip, Sarah W.; Wall, John T.; Pearlson, Godfrey D.; Worhunsky, Patrick D.; Garrison, Kathleen A.; Moran, Joseph M.
2016-01-01
Functional magnetic resonance imaging (fMRI) studies regularly use univariate general-linear-model-based analyses (GLM). Their findings are often inconsistent across different studies, perhaps because of several fundamental brain properties including functional heterogeneity, balanced excitation and inhibition (E/I), and sparseness of neuronal activities. These properties stipulate heterogeneous neuronal activities in the same voxels and likely limit the sensitivity and specificity of GLM. This paper selectively reviews findings of histological and electrophysiological studies and fMRI spatial independent component analysis (sICA) and reports new findings by applying sICA to two existing datasets. The extant and new findings consistently demonstrate several novel features of brain functional organization not revealed by GLM. They include overlap of large-scale functional networks (FNs) and their concurrent opposite modulations, and no significant modulations in activity of most FNs across the whole brain during any task conditions. These novel features of brain functional organization are highly consistent with the brain’s properties of functional heterogeneity, balanced E/I, and sparseness of neuronal activity, and may help reconcile inconsistent GLM findings. PMID:27592153
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.
2017-01-01
Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time-courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum norm (WMNLS) algorithm using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG when presenting novel and salient stimuli respectively, are related to the neuronal activation in large-scale networks, operating at different latencies and associated with different functional processes. PMID:20052528
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the the hierarchical amplitudes QJ at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package accompanies script for automated installation of necessary external programs on Ubuntu Linux; however, the pipeline should be also compatible with other Linux and Unix systems after necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
Bioinspired Wood Nanotechnology for Functional Materials.
Berglund, Lars A; Burgert, Ingo
2018-05-01
It is a challenging task to realize the vision of hierarchically structured nanomaterials for large-scale applications. Herein, the biomaterial wood as a large-scale biotemplate for functionalization at multiple scales is discussed, to provide an increased property range to this renewable and CO 2 -storing bioresource, which is available at low cost and in large quantities. The Progress Report reviews the emerging field of functional wood materials in view of the specific features of the structural template and novel nanotechnological approaches for the development of wood-polymer composites and wood-mineral hybrids for advanced property profiles and new functions. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Theoretical Definition of Instructor Role in Computer-Managed Instruction.
ERIC Educational Resources Information Center
McCombs, Barbara L.; Dobrovolny, Jacqueline L.
This report describes the results of a theoretical analysis of the ideal role functions of the Computer Managed Instruction (CMI) instructor. Concepts relevant to instructor behavior are synthesized from both cognitive and operant learning theory perspectives, and the roles allocated to instructors by seven large-scale operational CMI systems are…
To the Cloud! A Grassroots Proposal to Accelerate Brain Science Discovery
Vogelstein, Joshua T.; Mensh, Brett; Hausser, Michael; Spruston, Nelson; Evans, Alan; Kording, Konrad; Amunts, Katrin; Ebell, Christoph; Muller, Jeff; Telefont, Martin; Hill, Sean; Koushika, Sandhya P.; Cali, Corrado; Valdés-Sosa, Pedro Antonio; Littlewood, Peter; Koch, Christof; Saalfeld, Stephan; Kepecs, Adam; Peng, Hanchuan; Halchenko, Yaroslav O.; Kiar, Gregory; Poo, Mu-Ming; Poline, Jean-Baptiste; Milham, Michael P.; Schaffer, Alyssa Picchini; Gidron, Rafi; Okano, Hideyuki; Calhoun, Vince D; Chun, Miyoung; Kleissas, Dean M.; Vogelstein, R. Jacob; Perlman, Eric; Burns, Randal; Huganir, Richard; Miller, Michael I.
2018-01-01
The revolution in neuroscientific data acquisition is creating an analysis challenge. We propose leveraging cloud-computing technologies to enable large-scale neurodata storing, exploring, analyzing, and modeling. This utility will empower scientists globally to generate and test theories of brain function and dysfunction. PMID:27810005
NASA Astrophysics Data System (ADS)
O'Connor, B. L.; Carr, A.; Patton, T.; Hamada, Y.
2011-12-01
The Bureau of Land Management (BLM) and the Department of Energy are preparing a joint programmatic environmental impact statement (PEIS) assessing the potential impacts of utility-scale solar energy development on BLM-administered lands in six southwestern states. One of the alternatives considered in the PEIS involves development within identified solar energy zones (SEZs) that individually cover approximately 10 to 1,000 km2, located primarily in desert valleys of the Basin and Range physiographic region. Land-disturbing activities in these alluvium-filled valleys have the potential to adversely affect ephemeral streams with respect to their hydrologic, geomorphic, and ecologic functions. Regulation and management of ephemeral streams typically falls under the spectrum of federal, state, and local programs, but scientifically based guidelines for protecting ephemeral streams with respect to land-development activities are largely nonexistent. The PEIS analysis attempts to identify critical ephemeral streams by evaluating the integral functions of flood conveyance, sediment transport, groundwater recharge, and supporting ecological habitats. The initial approach to classifying critical ephemeral streams involved identifying large, erosional features using available flood hazards mapping, historical peak discharges, and aerial photographs. This approach identified ephemeral features not suitable for development (based primarily on the likelihood of damaging floods and debris flows) to address flood conveyance and sediment transport functions of ephemeral streams. Groundwater recharge and the maintenance of riparian vegetation and wildlife habitats are other functions of ephemeral streams. These functions are typically associated with headwater reaches rather than large-scale erosional features. Recognizing that integral functions of ephemeral streams occur over a range of spatial scales and are driven by varying climatic-hydrologic events, the PEIS analysis assesses ephemeral streams according to their position in the basin, stream order, and the recurrence intervals of runoff events in the basin. A key constraint on this approach is the lack of high-resolution hydrologic, geomorphic, and ecological data for ephemeral streams in remote desert basins of the southwest United States. Consultation with stakeholders and management agencies is an additional component to assist with our analysis where data limitations exist. Results from these analyses identify critical ephemeral stream reaches to be avoided during development activities based on a mix of quantitative and qualitative measures. Long-term monitoring of these systems is needed to assess the avoidance criteria and to help advance development of the tools needed to help manage and protect the integral functions of ephemeral stream networks in arid environments.
The Angular Correlation Function of Galaxies from Early Sloan Digital Sky Survey Data
NASA Astrophysics Data System (ADS)
Connolly, Andrew J.; Scranton, Ryan; Johnston, David; Dodelson, Scott; Eisenstein, Daniel J.; Frieman, Joshua A.; Gunn, James E.; Hui, Lam; Jain, Bhuvnesh; Kent, Stephen; Loveday, Jon; Nichol, Robert C.; O'Connell, Liam; Postman, Marc; Scoccimarro, Roman; Sheth, Ravi K.; Stebbins, Albert; Strauss, Michael A.; Szalay, Alexander S.; Szapudi, István; Tegmark, Max; Vogeley, Michael S.; Zehavi, Idit; Annis, James; Bahcall, Neta; Brinkmann, J.; Csabai, István; Doi, Mamoru; Fukugita, Masataka; Hennessy, G. S.; Hindsley, Robert; Ichikawa, Takashi; Ivezić, Željko; Kim, Rita S. J.; Knapp, Gillian R.; Kunszt, Peter; Lamb, D. Q.; Lee, Brian C.; Lupton, Robert H.; McKay, Timothy A.; Munn, Jeff; Peoples, John; Pier, Jeff; Rockosi, Constance; Schlegel, David; Stoughton, Christopher; Tucker, Douglas L.; Yanny, Brian; York, Donald G.
2002-11-01
The Sloan Digital Sky Survey is one of the first multicolor photometric and spectroscopic surveys designed to measure the statistical properties of galaxies within the local universe. In this paper we present some of the initial results on the angular two-point correlation function measured from the early SDSS galaxy data. The form of the correlation function, over the magnitude interval 18
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to 'topdog'…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of combined…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of 'underdog'…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…
NASA Technical Reports Server (NTRS)
Givi, Peyman; Jaberi, Farhad A.
2001-01-01
The basic objective of this work is to assess the influence of gravity on "the compositional and the spatial structures" of transitional and turbulent diffusion flames via large eddy simulation (LES), and direct numerical simulation (DNS). The DNS is conducted for appraisal of the various closures employed in LES, and to study the effect of buoyancy on the small scale flow features. The LES is based on our "filtered mass density function"' (FMDF) model. The novelty of the methodology is that it allows for reliable simulations with inclusion of "realistic physics." It also allows for detailed analysis of the unsteady large scale flow evolution and compositional flame structure which is not usually possible via Reynolds averaged simulations.
Functional network alterations and their structural substrate in drug-resistant epilepsy
Caciagli, Lorenzo; Bernhardt, Boris C.; Hong, Seok-Jun; Bernasconi, Andrea; Bernasconi, Neda
2014-01-01
The advent of MRI has revolutionized the evaluation and management of drug-resistant epilepsy by allowing the detection of the lesion associated with the region that gives rise to seizures. Recent evidence indicates marked chronic alterations in the functional organization of lesional tissue and large-scale cortico-subcortical networks. In this review, we focus on recent methodological developments in functional MRI (fMRI) analysis techniques and their application to the two most common drug-resistant focal epilepsies, i.e., temporal lobe epilepsy related to mesial temporal sclerosis and extra-temporal lobe epilepsy related to focal cortical dysplasia. We put particular emphasis on methodological developments in the analysis of task-free or “resting-state” fMRI to probe the integrity of intrinsic networks on a regional, inter-regional, and connectome-wide level. In temporal lobe epilepsy, these techniques have revealed disrupted connectivity of the ipsilateral mesiotemporal lobe, together with contralateral compensatory reorganization and striking reconfigurations of large-scale networks. In cortical dysplasia, initial observations indicate functional alterations in lesional, peri-lesional, and remote neocortical regions. While future research is needed to critically evaluate the reliability, sensitivity, and specificity, fMRI mapping promises to lend distinct biomarkers for diagnosis, presurgical planning, and outcome prediction. PMID:25565942
The mean density and two-point correlation function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1988-01-01
The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis . The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting edge dataanalysis/ mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Vo, T D; Dwyer, G; Szeto, H H
1986-04-01
A relatively powerful and inexpensive microcomputer-based system for the spectral analysis of the EEG is presented. High resolution and speed is achieved with the use of recently available large-scale integrated circuit technology with enhanced functionality (INTEL Math co-processors 8087) which can perform transcendental functions rapidly. The versatility of the system is achieved with a hardware organization that has distributed data acquisition capability performed by the use of a microprocessor-based analog to digital converter with large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly language subroutines perform on-line or off-line the fast Fourier transform and spectral analysis of the EEG which is stored as soft as well as hard copy. Some results obtained from test application of the entire system in animal studies are presented.
ERIC Educational Resources Information Center
Burstein, Leigh
Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenodorou, Andreas; Boucaud, Philippe; de Soto, Feliciano
We report on an instanton-based analysis of the gluon Green functions in the Landau gauge for low momenta; in particular we use lattice results for αs in the symmetric momentum subtraction scheme (MOM) for large-volume lattice simulations. We have exploited quenched gauge field configurations, Nf = 0, with both Wilson and tree-level Symanzik improved actions, and unquenched ones with Nf = 2 + 1 and Nf = 2 + 1 + 1 dynamical flavors (domain wall and twisted-mass fermions, respectively).We show that the dominance of instanton correlations on the low-momenta gluon Green functions can be applied to the determination ofmore » phenomenological parameters of the instanton liquid and, eventually, to a determination of the lattice spacing.We furthermore apply the Gradient Flow to remove short-distance fluctuations. The Gradient Flow gets rid of the QCD scale, ΛQCD, and reveals that the instanton prediction extents to large momenta. For those gauge field configurations free of quantum fluctuations, the direct study of topological charge density shows the appearance of large-scale lumps that can be identified as instantons, giving access to a direct study of the instanton density and size distribution that is compatible with those extracted from the analysis of the Green functions.« less
A reference guide for tree analysis and visualization
2010-01-01
The quantities of data obtained by the new high-throughput technologies, such as microarrays or ChIP-Chip arrays, and the large-scale OMICS-approaches, such as genomics, proteomics and transcriptomics, are becoming vast. Sequencing technologies become cheaper and easier to use and, thus, large-scale evolutionary studies towards the origins of life for all species and their evolution becomes more and more challenging. Databases holding information about how data are related and how they are hierarchically organized expand rapidly. Clustering analysis is becoming more and more difficult to be applied on very large amounts of data since the results of these algorithms cannot be efficiently visualized. Most of the available visualization tools that are able to represent such hierarchies, project data in 2D and are lacking often the necessary user friendliness and interactivity. For example, the current phylogenetic tree visualization tools are not able to display easy to understand large scale trees with more than a few thousand nodes. In this study, we review tools that are currently available for the visualization of biological trees and analysis, mainly developed during the last decade. We describe the uniform and standard computer readable formats to represent tree hierarchies and we comment on the functionality and the limitations of these tools. We also discuss on how these tools can be developed further and should become integrated with various data sources. Here we focus on freely available software that offers to the users various tree-representation methodologies for biological data analysis. PMID:20175922
Natural bond orbital analysis in the ONETEP code: applications to large protein systems.
Lee, Louis P; Cole, Daniel J; Payne, Mike C; Skylaris, Chris-Kriton
2013-03-05
First principles electronic structure calculations are typically performed in terms of molecular orbitals (or bands), providing a straightforward theoretical avenue for approximations of increasing sophistication, but do not usually provide any qualitative chemical information about the system. We can derive such information via post-processing using natural bond orbital (NBO) analysis, which produces a chemical picture of bonding in terms of localized Lewis-type bond and lone pair orbitals that we can use to understand molecular structure and interactions. We present NBO analysis of large-scale calculations with the ONETEP linear-scaling density functional theory package, which we have interfaced with the NBO 5 analysis program. In ONETEP calculations involving thousands of atoms, one is typically interested in particular regions of a nanosystem whilst accounting for long-range electronic effects from the entire system. We show that by transforming the Non-orthogonal Generalized Wannier Functions of ONETEP to natural atomic orbitals, NBO analysis can be performed within a localized region in such a way that ensures the results are identical to an analysis on the full system. We demonstrate the capabilities of this approach by performing illustrative studies of large proteins--namely, investigating changes in charge transfer between the heme group of myoglobin and its ligands with increasing system size and between a protein and its explicit solvent, estimating the contribution of electronic delocalization to the stabilization of hydrogen bonds in the binding pocket of a drug-receptor complex, and observing, in situ, the n → π* hyperconjugative interactions between carbonyl groups that stabilize protein backbones. Copyright © 2012 Wiley Periodicals, Inc.
Consciousness, cognition and brain networks: New perspectives.
Aldana, E M; Valverde, J L; Fábregas, N
2016-10-01
A detailed analysis of the literature on consciousness and cognition mechanisms based on the neural networks theory is presented. The immune and inflammatory response to the anesthetic-surgical procedure induces modulation of neuronal plasticity by influencing higher cognitive functions. Anesthetic drugs can cause unconsciousness, producing a functional disruption of cortical and thalamic cortical integration complex. The external and internal perceptions are processed through an intricate network of neural connections, involving the higher nervous activity centers, especially the cerebral cortex. This requires an integrated model, formed by neural networks and their interactions with highly specialized regions, through large-scale networks, which are distributed throughout the brain collecting information flow of these perceptions. Functional and effective connectivity between large-scale networks, are essential for consciousness, unconsciousness and cognition. It is what is called the "human connectome" or map neural networks. Copyright © 2014 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
Weighted and directed interactions in evolving large-scale epileptic brain networks
NASA Astrophysics Data System (ADS)
Dickten, Henning; Porz, Stephan; Elger, Christian E.; Lehnertz, Klaus
2016-10-01
Epilepsy can be regarded as a network phenomenon with functionally and/or structurally aberrant connections in the brain. Over the past years, concepts and methods from network theory substantially contributed to improve the characterization of structure and function of these epileptic networks and thus to advance understanding of the dynamical disease epilepsy. We extend this promising line of research and assess—with high spatial and temporal resolution and using complementary analysis approaches that capture different characteristics of the complex dynamics—both strength and direction of interactions in evolving large-scale epileptic brain networks of 35 patients that suffered from drug-resistant focal seizures with different anatomical onset locations. Despite this heterogeneity, we find that even during the seizure-free interval the seizure onset zone is a brain region that, when averaged over time, exerts strongest directed influences over other brain regions being part of a large-scale network. This crucial role, however, manifested by averaging on the population-sample level only - in more than one third of patients, strongest directed interactions can be observed between brain regions far off the seizure onset zone. This may guide new developments for individualized diagnosis, treatment and control.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k -mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
Applications of species accumulation curves in large-scale biological data analysis
Deng, Chao; Daley, Timothy; Smith, Andrew D
2016-01-01
The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899
Feuermann, Marc; Gaudet, Pascale; Mi, Huaiyu; Lewis, Suzanna E; Thomas, Paul D
2016-01-01
We previously reported a paradigm for large-scale phylogenomic analysis of gene families that takes advantage of the large corpus of experimentally supported Gene Ontology (GO) annotations. This 'GO Phylogenetic Annotation' approach integrates GO annotations from evolutionarily related genes across ∼100 different organisms in the context of a gene family tree, in which curators build an explicit model of the evolution of gene functions. GO Phylogenetic Annotation models the gain and loss of functions in a gene family tree, which is used to infer the functions of uncharacterized (or incompletely characterized) gene products, even for human proteins that are relatively well studied. Here, we report our results from applying this paradigm to two well-characterized cellular processes, apoptosis and autophagy. This revealed several important observations with respect to GO annotations and how they can be used for function inference. Notably, we applied only a small fraction of the experimentally supported GO annotations to infer function in other family members. The majority of other annotations describe indirect effects, phenotypes or results from high throughput experiments. In addition, we show here how feedback from phylogenetic annotation leads to significant improvements in the PANTHER trees, the GO annotations and GO itself. Thus GO phylogenetic annotation both increases the quantity and improves the accuracy of the GO annotations provided to the research community. We expect these phylogenetically based annotations to be of broad use in gene enrichment analysis as well as other applications of GO annotations.Database URL: http://amigo.geneontology.org/amigo. © The Author(s) 2016. Published by Oxford University Press.
Li, Guo Chun; Song, Hua Dong; Li, Qi; Bu, Shu Hai
2017-11-01
In Abies fargesii forests of the giant panda's habitats in Mt. Taibai, the spatial distribution patterns and interspecific associations of main tree species and their spatial associations with the understory flowering Fargesia qinlingensis were analyzed at multiple scales by univariate and bivaria-te O-ring function in point pattern analysis. The results showed that in the A. fargesii forest, the number of A. fargesii was largest but its population structure was in decline. The population of Betula platyphylla was relatively young, with a stable population structure, while the population of B. albo-sinensis declined. The three populations showed aggregated distributions at small scales and gradually showed random distributions with increasing spatial scales. Spatial associations among tree species were mainly showed at small scales and gradually became not spatially associated with increasing scale. A. fargesii and B. platyphylla were positively associated with flowering F. qinlingensis at large and medium scales, whereas B. albo-sinensis showed negatively associated with flowering F. qinlingensis at large and medium scales. The interaction between trees and F. qinlingensis in the habitats of giant panda promoted the dynamic succession and development of forests, which changed the environment of giant panda's habitats in Qinling.
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity, L(sub *), scales with richness class as (L(sub *) varies as N(sub*)(exp gamma), where N(sub *) is the corrected, mean number of galaxies in a richness class, and the best-fitting exponent is gamma = 1.3 +/- 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
Large-scale functional networks connect differently for processing words and symbol strings.
Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta
2018-01-01
Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, etal. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Xu, Shou-Ling; Chalkley, Robert J; Maynard, Jason C; Wang, Wenfei; Ni, Weimin; Jiang, Xiaoyue; Shin, Kihye; Cheng, Ling; Savage, Dasha; Hühmer, Andreas F R; Burlingame, Alma L; Wang, Zhi-Yong
2017-02-21
Genetic studies have shown essential functions of O-linked N -acetylglucosamine (O-GlcNAc) modification in plants. However, the proteins and sites subject to this posttranslational modification are largely unknown. Here, we report a large-scale proteomic identification of O-GlcNAc-modified proteins and sites in the model plant Arabidopsis thaliana Using lectin weak affinity chromatography to enrich modified peptides, followed by mass spectrometry, we identified 971 O-GlcNAc-modified peptides belonging to 262 proteins. The modified proteins are involved in cellular regulatory processes, including transcription, translation, epigenetic gene regulation, and signal transduction. Many proteins have functions in developmental and physiological processes specific to plants, such as hormone responses and flower development. Mass spectrometric analysis of phosphopeptides from the same samples showed that a large number of peptides could be modified by either O-GlcNAcylation or phosphorylation, but cooccurrence of the two modifications in the same peptide molecule was rare. Our study generates a snapshot of the O-GlcNAc modification landscape in plants, indicating functions in many cellular regulation pathways and providing a powerful resource for further dissecting these functions at the molecular level.
PinAPL-Py: A comprehensive web-application for the analysis of CRISPR/Cas9 screens.
Spahn, Philipp N; Bath, Tyler; Weiss, Ryan J; Kim, Jihoon; Esko, Jeffrey D; Lewis, Nathan E; Harismendy, Olivier
2017-11-20
Large-scale genetic screens using CRISPR/Cas9 technology have emerged as a major tool for functional genomics. With its increased popularity, experimental biologists frequently acquire large sequencing datasets for which they often do not have an easy analysis option. While a few bioinformatic tools have been developed for this purpose, their utility is still hindered either due to limited functionality or the requirement of bioinformatic expertise. To make sequencing data analysis of CRISPR/Cas9 screens more accessible to a wide range of scientists, we developed a Platform-independent Analysis of Pooled Screens using Python (PinAPL-Py), which is operated as an intuitive web-service. PinAPL-Py implements state-of-the-art tools and statistical models, assembled in a comprehensive workflow covering sequence quality control, automated sgRNA sequence extraction, alignment, sgRNA enrichment/depletion analysis and gene ranking. The workflow is set up to use a variety of popular sgRNA libraries as well as custom libraries that can be easily uploaded. Various analysis options are offered, suitable to analyze a large variety of CRISPR/Cas9 screening experiments. Analysis output includes ranked lists of sgRNAs and genes, and publication-ready plots. PinAPL-Py helps to advance genome-wide screening efforts by combining comprehensive functionality with user-friendly implementation. PinAPL-Py is freely accessible at http://pinapl-py.ucsd.edu with instructions and test datasets.
Robust Long-Range Coordination of Spontaneous Neural Activity in Waking, Sleep and Anesthesia.
Liu, Xiao; Yanagawa, Toru; Leopold, David A; Fujii, Naotaka; Duyn, Jeff H
2015-09-01
Although the emerging field of functional connectomics relies increasingly on the analysis of spontaneous fMRI signal covariation to infer the spatial fingerprint of the brain's large-scale functional networks, the nature of the underlying neuro-electrical activity remains incompletely understood. In part, this lack in understanding owes to the invasiveness of electrophysiological acquisition, the difficulty in their simultaneous recording over large cortical areas, and the absence of fully established methods for unbiased extraction of network information from these data. Here, we demonstrate a novel, data-driven approach to analyze spontaneous signal variations in electrocorticographic (ECoG) recordings from nearly entire hemispheres of macaque monkeys. Based on both broadband analysis and analysis of specific frequency bands, the ECoG signals were found to co-vary in patterns that resembled the fMRI networks reported in previous studies. The extracted patterns were robust against changes in consciousness associated with sleep and anesthesia, despite profound changes in intrinsic characteristics of the raw signals, including their spectral signatures. These results suggest that the spatial organization of large-scale brain networks results from neural activity with a broadband spectral feature and is a core aspect of the brain's physiology that does not depend on the state of consciousness. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents findings from an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of discriminant function analysis of…
Large-scale Granger causality analysis on resting-state functional MRI
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel
2016-03-01
We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, large-scale Granger Causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as accomplished by the Louvain method. We demonstrate the effectiveness of our approach to recover the motor and visual cortex from resting state human brain fMRI data and compare it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice Coefficient (DC). The best DC obtained was 0.59 implying a strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in the resting-state fMRI.
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Application of Three Cognitive Diagnosis Models to ESL Reading and Listening Assessments
ERIC Educational Resources Information Center
Lee, Yong-Won; Sawaki, Yasuyo
2009-01-01
The present study investigated the functioning of three psychometric models for cognitive diagnosis--the general diagnostic model, the fusion model, and latent class analysis--when applied to large-scale English as a second language listening and reading comprehension assessments. Data used in this study were scored item responses and incidence…
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
Przybylski, T. Riordan , C. Rowen, and D. Van’t Hof, "A CMOS RISC Processor with Integrated System Functions," In Proc. of the 1986 COMPCON. IEEE, March 1986...Sequencers," Communications of the ACM, 22(2):115-123, 1979. 115 [Richardson, 1988] Rick Richardson, "Dhrystone 2.1 Benchmark," Usenet Distribution
A Survey and Analysis of Access Control Architectures for XML Data
2006-03-01
13 4. XML Query Engines ...castle and the drawbridge over the moat. Extending beyond the visual analogy, there are many key components to the protection of information and...technology. While XML’s original intent was to enable large-scale electronic publishing over the internet, its functionality is firmly rooted in its
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a galaxy redshift survey with large-volume (˜100Gpc3). A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations
NASA Astrophysics Data System (ADS)
Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana
2018-05-01
The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.
A numerical study of the string function using a primitive equation ocean model
NASA Astrophysics Data System (ADS)
Tyler, R. H.; Käse, R.
We use results from a primitive-equation ocean numerical model (SCRUM) to test a theoretical 'string function' formulation put forward by Tyler and Käse in another article in this issue. The string function acts as a stream function for the large-scale potential energy flow under the combined beta and topographic effects. The model results verify that large-scale anomalies propagate along the string function contours with a speed correctly given by the cross-string gradient. For anomalies having a scale similar to the Rossby radius, material rates of change in the layer mass following the string velocity are balanced by material rates of change in relative vorticity following the flow velocity. It is shown that large-amplitude anomalies can be generated when wind stress is resonant with the string function configuration.
Multi-Scale Effects in the Strength of Ceramics
Cook, Robert F.
2016-01-01
Multiple length-scale effects are demonstrated in indentation-strength measurements of a range of ceramic materials under inert and reactive conditions. Meso-scale effects associated with flaw disruption by lateral cracking at large indentation loads are shown to increase strengths above the ideal indentation response. Micro-scale effects associated with toughening by microstructural restraints at small indentation loads are shown to decrease strengths below the ideal response. A combined meso-micro-scale analysis is developed that describes ceramic inert strength behaviors over the complete indentation flaw size range. Nano-scale effects associated with chemical equilibria and crack velocity thresholds are shown to lead to invariant minimum strengths at slow applied stressing rates under reactive conditions. A combined meso-micro-nano-scale analysis is developed that describes the full range of reactive and inert strength behaviors as a function of indentation load and applied stressing rate. Applications of the multi-scale analysis are demonstrated for materials design, materials selection, toughness determination, crack velocity determination, bond-rupture parameter determination, and prediction of reactive strengths. The measurements and analysis provide strong support for the existence of sharp crack tips in ceramics such that the nano-scale mechanisms of discrete bond rupture are separate from the larger scale crack driving force mechanics characterized by continuum-based stress-intensity factors. PMID:27563150
“Guilt by Association” Is the Exception Rather Than the Rule in Gene Networks
Gillis, Jesse; Pavlidis, Paul
2012-01-01
Gene networks are commonly interpreted as encoding functional information in their connections. An extensively validated principle called guilt by association states that genes which are associated or interacting are more likely to share function. Guilt by association provides the central top-down principle for analyzing gene networks in functional terms or assessing their quality in encoding functional information. In this work, we show that functional information within gene networks is typically concentrated in only a very few interactions whose properties cannot be reliably related to the rest of the network. In effect, the apparent encoding of function within networks has been largely driven by outliers whose behaviour cannot even be generalized to individual genes, let alone to the network at large. While experimentalist-driven analysis of interactions may use prior expert knowledge to focus on the small fraction of critically important data, large-scale computational analyses have typically assumed that high-performance cross-validation in a network is due to a generalizable encoding of function. Because we find that gene function is not systemically encoded in networks, but dependent on specific and critical interactions, we conclude it is necessary to focus on the details of how networks encode function and what information computational analyses use to extract functional meaning. We explore a number of consequences of this and find that network structure itself provides clues as to which connections are critical and that systemic properties, such as scale-free-like behaviour, do not map onto the functional connectivity within networks. PMID:22479173
Rest but busy: Aberrant resting-state functional connectivity of triple network model in insomnia.
Dong, Xiaojuan; Qin, Haixia; Wu, Taoyu; Hu, Hua; Liao, Keren; Cheng, Fei; Gao, Dong; Lei, Xu
2018-02-01
One classical hypothesis among many models to explain the etiology and maintenance of insomnia disorder (ID) is hyperarousal. Aberrant functional connectivity among resting-state large-scale brain networks may be the underlying neurological mechanisms of this hypothesis. The aim of current study was to investigate the functional network connectivity (FNC) among large-scale brain networks in patients with insomnia disorder (ID) during resting state. In the present study, the resting-state fMRI was used to evaluate whether patients with ID showed aberrant FNC among dorsal attention network (DAN), frontoparietal control network (FPC), anterior default mode network (aDMN), and posterior default mode network (pDMN) compared with healthy good sleepers (HGSs). The Pearson's correlation analysis was employed to explore whether the abnormal FNC observed in patients with ID was associated with sleep parameters, cognitive and emotional scores, and behavioral performance assessed by questionnaires and tasks. Patients with ID had worse subjective thought control ability measured by Thought Control Ability Questionnaire (TCAQ) and more negative affect than HGSs. Intriguingly, relative to HGSs, patients with ID showed a significant increase in FNC between DAN and FPC, but a significant decrease in FNC between aDMN and pDMN. Exploratory analysis in patients with ID revealed a significantly positive correlation between the DAN-FPC FNC and reaction time (RT) of psychomotor vigilance task (PVT). The current study demonstrated that even during the resting state, the task-activated and task-deactivated large-scale brain networks in insomniacs may still maintain a hyperarousal state, looking quite similar to the pattern in a task condition with external stimuli. Those results support the hyperarousal model of insomnia.
NASA Astrophysics Data System (ADS)
Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus
2017-04-01
Ongoing pressures on European land are manifold with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of the ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020 and the UN Sustainable development goals define a Land Degradation Neutral world as one of the targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly allowing for the assessment of the current state of LD as well as changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and repeated consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, as well as the complex inter-play of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land use changes relates to change in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates shortcomings of widely used linear trend model that does not account for abrupt change, nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for assessing gradual and abrupt changes in ecosystem functioning in Europe. Based on segmented trend analysis of water-use efficiency (WUE) time series, an Ecosystem Change Type (ECT) map was produced over Europe at 1km resolution for the period 1999 to 2013. An analysis of auxiliary data on land use/cover change, drought trends, and soil threats was performed over hotspot areas to better understand the observed changes in ecosystem functioning and their driving mechanisms. The ECT map was validated using the case study sites from the EU-funded RECARE project. Overall, the ECT map accurately highlighted areas characterized by a major change in pathways of ecosystem functioning as well as indicated the type and timing of changes. Allan, R. et al. (2007). Climate and land degradation. Verlag Berlin Heidelberg: Springer. de Jong, R et al. (2013). Remote Sensing, 5, 1117-1133 Horion, S. et al. (2016). Global Change Biology, 22, 2801-2817 Vogt, J. V et al. (2011). Land Degradation & Development, 22: 150-165.
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
Spectral fingerprints of large-scale neuronal interactions.
Siegel, Markus; Donner, Tobias H; Engel, Andreas K
2012-01-11
Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.
The connectivity structure, giant strong component and centrality of metabolic networks.
Ma, Hong-Wu; Zeng, An-Ping
2003-07-22
Structural and functional analysis of genome-based large-scale metabolic networks is important for understanding the design principles and regulation of the metabolism at a system level. The metabolic network is conventionally considered to be highly integrated and very complex. A rational reduction of the metabolic network to its core structure and a deeper understanding of its functional modules are important. In this work, we show that the metabolites in a metabolic network are far from fully connected. A connectivity structure consisting of four major subsets of metabolites and reactions, i.e. a fully connected sub-network, a substrate subset, a product subset and an isolated subset is found to exist in metabolic networks of 65 fully sequenced organisms. The largest fully connected part of a metabolic network, called 'the giant strong component (GSC)', represents the most complicated part and the core of the network and has the feature of scale-free networks. The average path length of the whole network is primarily determined by that of the GSC. For most of the organisms, GSC normally contains less than one-third of the nodes of the network. This connectivity structure is very similar to the 'bow-tie' structure of World Wide Web. Our results indicate that the bow-tie structure may be common for large-scale directed networks. More importantly, the uncovered structure feature makes a structural and functional analysis of large-scale metabolic network more amenable. As shown in this work, comparing the closeness centrality of the nodes in the GSC can identify the most central metabolites of a metabolic network. To quantitatively characterize the overall connection structure of the GSC we introduced the term 'overall closeness centralization index (OCCI)'. OCCI correlates well with the average path length of the GSC and is a useful parameter for a system-level comparison of metabolic networks of different organisms. http://genome.gbf.de/bioinformatics/
Decoupling local mechanics from large-scale structure in modular metamaterials.
Yang, Nan; Silverberg, Jesse L
2017-04-04
A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such "inverse design" is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module's design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
Decoupling local mechanics from large-scale structure in modular metamaterials
NASA Astrophysics Data System (ADS)
Yang, Nan; Silverberg, Jesse L.
2017-04-01
A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
Cell-free protein synthesis: applications in proteomics and biotechnology.
He, Mingyue
2008-01-01
Protein production is one of the key steps in biotechnology and functional proteomics. Expression of proteins in heterologous hosts (such as in E. coli) is generally lengthy and costly. Cell-free protein synthesis is thus emerging as an attractive alternative. In addition to the simplicity and speed for protein production, cell-free expression allows generation of functional proteins that are difficult to produce by in vivo systems. Recent exploitation of cell-free systems enables novel development of technologies for rapid discovery of proteins with desirable properties from very large libraries. This article reviews the recent development in cell-free systems and their application in the large scale protein analysis.
Scalable nuclear density functional theory with Sky3D
NASA Astrophysics Data System (ADS)
Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin
2018-02-01
In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.
Automated microscopy for high-content RNAi screening
2010-01-01
Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920
NASA Technical Reports Server (NTRS)
Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.
1987-01-01
This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
Connectome-Wide Phenotypical and Genotypical Associations in Focal Dystonia
Fuertinger, Stefan
2017-01-01
Isolated focal dystonia is a debilitating movement disorder of unknown pathophysiology. Early studies in focal dystonias have pointed to segregated changes in brain activity and connectivity. Only recently has the notion that dystonia pathophysiology may lie in abnormalities of large-scale brain networks appeared in the literature. Here, we outline a novel concept of functional connectome-wide alterations that are linked to dystonia phenotype and genotype. Using a neural community detection strategy and graph theoretical analysis of functional MRI data in human patients with the laryngeal form of dystonia (LD) and healthy controls (both males and females), we identified an abnormally widespread hub formation in LD, which particularly affected the primary sensorimotor and parietal cortices and thalamus. Left thalamic regions formed a delineated functional community that highlighted differences in network topology between LD patients with and without family history of dystonia. Conversely, marked differences in the topological organization of parietal regions were found between phenotypically different forms of LD. The interface between sporadic genotype and adductor phenotype of LD yielded four functional communities that were primarily governed by intramodular hub regions. Conversely, the interface between familial genotype and abductor phenotype was associated with numerous long-range hub nodes and an abnormal integration of left thalamus and basal ganglia. Our findings provide the first comprehensive atlas of functional topology across different phenotypes and genotypes of focal dystonia. As such, this study constitutes an important step toward defining dystonia as a large-scale network disorder, understanding its causative pathophysiology, and identifying disorder-specific markers. SIGNIFICANCE STATEMENT The architecture of the functional connectome in focal dystonia was analyzed in a large population of patients with laryngeal dystonia. Breaking with the empirical concept of dystonia as a basal ganglia disorder, we discovered large-scale alterations of neural communities that are significantly influenced by the disorder's clinical phenotype and genotype. PMID:28674168
NASA Astrophysics Data System (ADS)
McKay, N.
2017-12-01
As timescale increases from years to centuries, the spatial scale of covariability in the climate system is hypothesized to increase as well. Covarying spatial scales are larger for temperature than for hydroclimate, however, both aspects of the climate system show systematic changes on large-spatial scales on orbital to tectonic timescales. The extent to which this phenomenon is evident in temperature and hydroclimate at centennial timescales is largely unknown. Recent syntheses of multidecadal to century-scale variability in hydroclimate during the past 2k in the Arctic, North America, and Australasia show little spatial covariability in hydroclimate during the Common Era. To determine 1) the evidence for systematic relationships between the spatial scale of climate covariability as a function of timescale, and 2) whether century-scale hydroclimate variability deviates from the relationship between spatial covariability and timescale, we quantify this phenomenon during the Common Era by calculating the e-folding distance in large instrumental and paleoclimate datasets. We calculate this metric of spatial covariability, at different timescales (1, 10 and 100-yr), for a large network of temperature and precipitation observations from the Global Historical Climatology Network (n=2447), from v2.0.0 of the PAGES2k temperature database (n=692), and from moisture-sensitive paleoclimate records North America, the Arctic, and the Iso2k project (n = 328). Initial results support the hypothesis that the spatial scale of covariability is larger for temperature, than for precipitation or paleoclimate hydroclimate indicators. Spatially, e-folding distances for temperature are largest at low latitudes and over the ocean. Both instrumental and proxy temperature data show clear evidence for increasing spatial extent as a function of timescale, but this phenomenon is very weak in the hydroclimate data analyzed here. In the proxy hydroclimate data, which are predominantly indicators of effective moisture, e-folding distance increases from annual to decadal timescales, but does not continue to increase to centennial timescales. Future work includes examining additional instrumental and proxy datasets of moisture variability, and extending the analysis to millennial timescales of variability.
Dynamic functional connectivity: Promise, issues, and interpretations
Hutchison, R. Matthew; Womelsdorf, Thilo; Allen, Elena A.; Bandettini, Peter A.; Calhoun, Vince D.; Corbetta, Maurizio; Penna, Stefania Della; Duyn, Jeff H.; Glover, Gary H.; Gonzalez-Castillo, Javier; Handwerker, Daniel A.; Keilholz, Shella; Kiviniemi, Vesa; Leopold, David A.; de Pasquale, Francesco; Sporns, Olaf; Walter, Martin; Chang, Catie
2013-01-01
The brain must dynamically integrate, coordinate, and respond to internal and external stimuli across multiple time scales. Non-invasive measurements of brain activity with fMRI have greatly advanced our understanding of the large-scale functional organization supporting these fundamental features of brain function. Conclusions from previous resting-state fMRI investigations were based upon static descriptions of functional connectivity (FC), and only recently studies have begun to capitalize on the wealth of information contained within the temporal features of spontaneous BOLD FC. Emerging evidence suggests that dynamic FC metrics may index changes in macroscopic neural activity patterns underlying critical aspects of cognition and behavior, though limitations with regard to analysis and interpretation remain. Here, we review recent findings, methodological considerations, neural and behavioral correlates, and future directions in the emerging field of dynamic FC investigations. PMID:23707587
ERIC Educational Resources Information Center
Pietarinen, Janne; Pyhältö, Kirsi; Soini, Tiina
2017-01-01
The study aims to gain a better understanding of the national large-scale curriculum process in terms of the used implementation strategies, the function of the reform, and the curriculum coherence perceived by the stakeholders accountable in constructing the national core curriculum in Finland. A large body of school reform literature has shown…
Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus
2016-01-01
The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective times series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach “Task-related Edge Density” (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function. PMID:27341204
Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus
2016-01-01
The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective times series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach "Task-related Edge Density" (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function.
2016-06-22
this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2017-05-01
GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision -Version 2 of Basics - makes mostly minor additions to functionality and includes some simplifying name changes.
Functional Advantages of Conserved Intrinsic Disorder in RNA-Binding Proteins.
Varadi, Mihaly; Zsolyomi, Fruzsina; Guharoy, Mainak; Tompa, Peter
2015-01-01
Proteins form large macromolecular assemblies with RNA that govern essential molecular processes. RNA-binding proteins have often been associated with conformational flexibility, yet the extent and functional implications of their intrinsic disorder have never been fully assessed. Here, through large-scale analysis of comprehensive protein sequence and structure datasets we demonstrate the prevalence of intrinsic structural disorder in RNA-binding proteins and domains. We addressed their functionality through a quantitative description of the evolutionary conservation of disordered segments involved in binding, and investigated the structural implications of flexibility in terms of conformational stability and interface formation. We conclude that the functional role of intrinsically disordered protein segments in RNA-binding is two-fold: first, these regions establish extended, conserved electrostatic interfaces with RNAs via induced fit. Second, conformational flexibility enables them to target different RNA partners, providing multi-functionality, while also ensuring specificity. These findings emphasize the functional importance of intrinsically disordered regions in RNA-binding proteins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in termsmore » of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of Lyman-α forest, in which Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict the velocity bias of -0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for the highly biased galaxies, but this depends on the model parameters and can also vanish or change the sign.« less
Goch, Caspar J; Stieltjes, Bram; Henze, Romy; Hering, Jan; Poustka, Luise; Meinzer, Hans-Peter; Maier-Hein, Klaus H
2014-05-01
Diagnosis of autism spectrum disorders (ASD) is difficult, as symptoms vary greatly and are difficult to quantify objectively. Recent work has focused on the assessment of non-invasive diffusion tensor imaging-based biomarkers that reflect the microstructural characteristics of neuronal pathways in the brain. While tractography-based approaches typically analyze specific structures of interest, a graph-based large-scale network analysis of the connectome can yield comprehensive measures of larger-scale architectural patterns in the brain. Commonly applied global network indices, however, do not provide any specificity with respect to functional areas or anatomical structures. Aim of this work was to assess the concept of network centrality as a tool to perform locally specific analysis without disregarding the global network architecture and compare it to other popular network indices. We create connectome networks from fiber tractographies and parcellations of the human brain and compute global network indices as well as local indices for Wernicke's Area, Broca's Area and the Motor Cortex. Our approach was evaluated on 18 children suffering from ASD and 18 typically developed controls using magnetic resonance imaging-based cortical parcellations in combination with diffusion tensor imaging tractography. We show that the network centrality of Wernicke's area is significantly (p<0.001) reduced in ASD, while the motor cortex, which was used as a control region, did not show significant alterations. This could reflect the reduced capacity for comprehension of language in ASD. The betweenness centrality could potentially be an important metric in the development of future diagnostic tools in the clinical context of ASD diagnosis. Our results further demonstrate the applicability of large-scale network analysis tools in the domain of region-specific analysis with a potential application in many different psychological disorders.
Accurate evaluation and analysis of functional genomics data and methods
Greene, Casey S.; Troyanskaya, Olga G.
2016-01-01
The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more-accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703
Peroxisome Biogenesis and Function
Kaur, Navneet; Reumann, Sigrun; Hu, Jianping
2009-01-01
Peroxisomes are small and single membrane-delimited organelles that execute numerous metabolic reactions and have pivotal roles in plant growth and development. In recent years, forward and reverse genetic studies along with biochemical and cell biological analyses in Arabidopsis have enabled researchers to identify many peroxisome proteins and elucidate their functions. This review focuses on the advances in our understanding of peroxisome biogenesis and metabolism, and further explores the contribution of large-scale analysis, such as in sillco predictions and proteomics, in augmenting our knowledge of peroxisome function In Arabidopsis. PMID:22303249
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Kauth, R. J.; Thomas, G. S.
1976-01-01
The author has identified the following significant results. A conceptual man machine system framework was created for a large scale agricultural remote sensing system. The system is based on and can grow out of the local recognition mode of LACIE, through a gradual transition wherein computer support functions supplement and replace AI functions. Local proportion estimation functions are broken into two broad classes: (1) organization of the data within the sample segment; and (2) identification of the fields or groups of fields in the sample segment.
Zavaglia, Melissa; Forkert, Nils D.; Cheng, Bastian; Gerloff, Christian; Thomalla, Götz; Hilgetag, Claus C.
2015-01-01
Lesion analysis reveals causal contributions of brain regions to mental functions, aiding the understanding of normal brain function as well as rehabilitation of brain-damaged patients. We applied a novel lesion inference technique based on game theory, Multi-perturbation Shapley value Analysis (MSA), to a large clinical lesion dataset. We used MSA to analyze the lesion patterns of 148 acute stroke patients together with their neurological deficits, as assessed by the National Institutes of Health Stroke Scale (NIHSS). The results revealed regional functional contributions to essential behavioral and cognitive functions as reflected in the NIHSS, particularly by subcortical structures. There were also side specific differences of functional contributions between the right and left hemispheric brain regions which may reflect the dominance of the left hemispheric syndrome aphasia in the NIHSS. Comparison of MSA to established lesion inference methods demonstrated the feasibility of the approach for analyzing clinical data and indicated its capability for objectively inferring functional contributions from multiple injured, potentially interacting sites, at the cost of having to predict the outcome of unknown lesion configurations. The analysis of regional functional contributions to neurological symptoms measured by the NIHSS contributes to the interpretation of this widely used standardized stroke scale in clinical practice as well as clinical trials and provides a first approximation of a ‘map of stroke’. PMID:26448908
Zavaglia, Melissa; Forkert, Nils D; Cheng, Bastian; Gerloff, Christian; Thomalla, Götz; Hilgetag, Claus C
2015-01-01
Lesion analysis reveals causal contributions of brain regions to mental functions, aiding the understanding of normal brain function as well as rehabilitation of brain-damaged patients. We applied a novel lesion inference technique based on game theory, Multi-perturbation Shapley value Analysis (MSA), to a large clinical lesion dataset. We used MSA to analyze the lesion patterns of 148 acute stroke patients together with their neurological deficits, as assessed by the National Institutes of Health Stroke Scale (NIHSS). The results revealed regional functional contributions to essential behavioral and cognitive functions as reflected in the NIHSS, particularly by subcortical structures. There were also side specific differences of functional contributions between the right and left hemispheric brain regions which may reflect the dominance of the left hemispheric syndrome aphasia in the NIHSS. Comparison of MSA to established lesion inference methods demonstrated the feasibility of the approach for analyzing clinical data and indicated its capability for objectively inferring functional contributions from multiple injured, potentially interacting sites, at the cost of having to predict the outcome of unknown lesion configurations. The analysis of regional functional contributions to neurological symptoms measured by the NIHSS contributes to the interpretation of this widely used standardized stroke scale in clinical practice as well as clinical trials and provides a first approximation of a 'map of stroke'.
VizieR Online Data Catalog: GAMA. Stellar mass budget (Moffett+, 2016)
NASA Astrophysics Data System (ADS)
Moffett, A. J.; Lange, R.; Driver, S. P.; Robotham, A. S. G.; Kelvin, L. S.; Alpaslan, M.; Andrews, S. K.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Colless, M.; Davies, L. J. M.; Holwerda, B. W.; Hopkins, A. M.; Kafle, P. R.; Liske, J.; Meyer, M.
2018-04-01
Using the recently expanded Galaxy and Mass Assembly (GAMA) survey phase II visual morphology sample and the large-scale bulge and disc decomposition analysis of Lange et al. (2016MNRAS.462.1470L), we derive new stellar mass function fits to galaxy spheroid and disc populations down to log(M*/Mȯ)=8. (1 data file).
ERIC Educational Resources Information Center
Maddox, Bryan; Zumbo, Bruno D.; Tay-Lim, Brenda; Qu, Demin
2015-01-01
This article explores the potential for ethnographic observations to inform the analysis of test item performance. In 2010, a standardized, large-scale adult literacy assessment took place in Mongolia as part of the United Nations Educational, Scientific and Cultural Organization Literacy Assessment and Monitoring Programme (LAMP). In a novel form…
Preliminary logging analysis system (PLANS): overview.
R.H. Twito; S.E. Reutebuch; R.J. McGaughey; C.N. Mann
1987-01-01
The paper previews a computer-aided design system, PLANS, that is useful for developing timber harvest and road network plans on large-scale topographic maps. Earlier planning techniques are reviewed, and the advantages are explained of using advanced planning systems like PLANS. There is a brief summary of the input, output, and function of each program in the PLANS...
Large-scale systems: Complexity, stability, reliability
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1975-01-01
After showing that a complex dynamic system with a competitive structure has highly reliable stability, a class of noncompetitive dynamic systems for which competitive models can be constructed is defined. It is shown that such a construction is possible in the context of the hierarchic stability analysis. The scheme is based on the comparison principle and vector Liapunov functions.
Differential Item Functioning Analysis for Accommodated versus Nonaccommodated Students
ERIC Educational Resources Information Center
Finch, Holmes; Barton, Karen; Meyer, Patrick
2009-01-01
The No Child Left Behind act resulted in an increased reliance on large-scale standardized tests to assess the progress of individual students as well as schools. In addition, emphasis was placed on including all students in the testing programs as well as those with disabilities. As a result, the role of testing accommodations has become more…
Ubiquitinated Proteome: Ready for Global?*
Shi, Yi; Xu, Ping; Qin, Jun
2011-01-01
Ubiquitin (Ub) is a small and highly conserved protein that can covalently modify protein substrates. Ubiquitination is one of the major post-translational modifications that regulate a broad spectrum of cellular functions. The advancement of mass spectrometers as well as the development of new affinity purification tools has greatly expedited proteome-wide analysis of several post-translational modifications (e.g. phosphorylation, glycosylation, and acetylation). In contrast, large-scale profiling of lysine ubiquitination remains a challenge. Most recently, new Ub affinity reagents such as Ub remnant antibody and tandem Ub binding domains have been developed, allowing for relatively large-scale detection of several hundreds of lysine ubiquitination events in human cells. Here we review different strategies for the identification of ubiquitination site and discuss several issues associated with data analysis. We suggest that careful interpretation and orthogonal confirmation of MS spectra is necessary to minimize false positive assignments by automatic searching algorithms. PMID:21339389
Li, Jian-Feng; Bush, Jenifer; Xiong, Yan; Li, Lei; McCormack, Matthew
2011-01-01
Protein-protein interactions (PPIs) constitute the regulatory network that coordinates diverse cellular functions. There are growing needs in plant research for creating protein interaction maps behind complex cellular processes and at a systems biology level. However, only a few approaches have been successfully used for large-scale surveys of PPIs in plants, each having advantages and disadvantages. Here we present split firefly luciferase complementation (SFLC) as a highly sensitive and noninvasive technique for in planta PPI investigation. In this assay, the separate halves of a firefly luciferase can come into close proximity and transiently restore its catalytic activity only when their fusion partners, namely the two proteins of interest, interact with each other. This assay was conferred with quantitativeness and high throughput potential when the Arabidopsis mesophyll protoplast system and a microplate luminometer were employed for protein expression and luciferase measurement, respectively. Using the SFLC assay, we could monitor the dynamics of rapamycin-induced and ascomycin-disrupted interaction between Arabidopsis FRB and human FKBP proteins in a near real-time manner. As a proof of concept for large-scale PPI survey, we further applied the SFLC assay to testing 132 binary PPIs among 8 auxin response factors (ARFs) and 12 Aux/IAA proteins from Arabidopsis. Our results demonstrated that the SFLC assay is ideal for in vivo quantitative PPI analysis in plant cells and is particularly powerful for large-scale binary PPI screens.
Gleadall, Andrew; Pan, Jingzhe; Ding, Lifeng; Kruft, Marc-Anton; Curcó, David
2015-11-01
Molecular dynamics (MD) simulations are widely used to analyse materials at the atomic scale. However, MD has high computational demands, which may inhibit its use for simulations of structures involving large numbers of atoms such as amorphous polymer structures. An atomic-scale finite element method (AFEM) is presented in this study with significantly lower computational demands than MD. Due to the reduced computational demands, AFEM is suitable for the analysis of Young's modulus of amorphous polymer structures. This is of particular interest when studying the degradation of bioresorbable polymers, which is the topic of an accompanying paper. AFEM is derived from the inter-atomic potential energy functions of an MD force field. The nonlinear MD functions were adapted to enable static linear analysis. Finite element formulations were derived to represent interatomic potential energy functions between two, three and four atoms. Validation of the AFEM was conducted through its application to atomic structures for crystalline and amorphous poly(lactide). Copyright © 2015 Elsevier Ltd. All rights reserved.
Evaluation of the reliability and validity for X16 balance testing scale for the elderly.
Ju, Jingjuan; Jiang, Yu; Zhou, Peng; Li, Lin; Ye, Xiaolei; Wu, Hongmei; Shen, Bin; Zhang, Jialei; He, Xiaoding; Niu, Chunjin; Xia, Qinghua
2018-05-10
Balance performance is considered as an indicator of functional status in the elderly, a large scale population screening and evaluation in the community context followed by proper interventions would be of great significance at public health level. However, there has been no suitable balance testing scale available for large scale studies in the unique community context of urban China. A balance scale named X16 balance testing scale was developed, which was composed of 3 domains and 16 items. A total of 1985 functionally independent and active community-dwelling elderly adults' balance abilities were tested using the X16 scale. The internal consistency, split-half reliability, content validity, construct validity, discriminant validity of X16 balance testing scale were evaluated. Factor analysis was performed to identify alternative factor structure. The Eigenvalues of factors 1, 2, and 3 were 8.53, 1.79, and 1.21, respectively, and their cumulative contribution to the total variance reached 72.0%. These 3 factors mainly represented domains static balance, postural stability, and dynamic balance. The Cronbach alpha coefficient for the scale was 0.933. The Spearman correlation coefficients between items and its corresponding domains were ranged from 0.538 to 0.964. The correlation coefficients between each item and its corresponding domain were higher than the coefficients between this item and other domains. With the increase of age, the scores of balance performance, domains static balance, postural stability, and dynamic balance in the elderly declined gradually (P < 0.001). With the increase of age, the proportion of the elderly with intact balance performance decreased gradually (P < 0.001). The reliability and validity of the X16 balance testing scale is both adequate and acceptable. Due to its simple and quick use features, it is practical to be used repeatedly and routinely especially in community setting and on large scale screening.
Efficient population-scale variant analysis and prioritization with VAPr.
Birmingham, Amanda; Mark, Adam M; Mazzaferro, Carlo; Xu, Guorong; Fisch, Kathleen M
2018-04-06
With the growing availability of population-scale whole-exome and whole-genome sequencing, demand for reproducible, scalable variant analysis has spread within genomic research communities. To address this need, we introduce the Python package VAPr (Variant Analysis and Prioritization). VAPr leverages existing annotation tools ANNOVAR and MyVariant.info with MongoDB-based flexible storage and filtering functionality. It offers biologists and bioinformatics generalists easy-to-use and scalable analysis and prioritization of genomic variants from large cohort studies. VAPr is developed in Python and is available for free use and extension under the MIT License. An install package is available on PyPi at https://pypi.python.org/pypi/VAPr, while source code and extensive documentation are on GitHub at https://github.com/ucsd-ccbb/VAPr. kfisch@ucsd.edu.
Best, Michael W; Grossman, Michael; Oyewumi, L Kola; Bowie, Christopher R
2016-04-01
We examined the factor structure of the Positive and Negative Syndrome Scale (PANSS) in early-episode psychosis and its relationships with functioning at baseline and follow-up. A total of 240 consecutive admissions to an early intervention in psychosis clinic were assessed at intake to the program with the PANSS, Global Assessment of Functioning (GAF) and Social and Occupational Functioning Assessment Scale (SOFAS). Seventy individuals were reassessed at follow-up. A maximum likelihood factor analysis was conducted on baseline PANSS scores and the ability of each factor to predict baseline and follow-up GAF and SOFAS was examined. A five-factor model with varimax rotation was the best fit to our data and was largely congruent with factors found previously. The negative symptom factor was the best predictor of GAF and SOFAS at baseline and follow-up. Negative symptoms are the best symptomatic predictor of functioning in individuals with early psychosis and are an important treatment target to improve recovery. © 2014 Wiley Publishing Asia Pty Ltd.
Questionnaire-based assessment of executive functioning: Psychometrics.
Castellanos, Irina; Kronenberger, William G; Pisoni, David B
2018-01-01
The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.
Annealed Scaling for a Charged Polymer
NASA Astrophysics Data System (ADS)
Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.
2016-03-01
This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation of the assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented and results of operational tests of global scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existingmore » operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.« less
Statistical Analysis of Large-Scale Structure of Universe
NASA Astrophysics Data System (ADS)
Tugay, A. V.
While galaxy cluster catalogs were compiled many decades ago, other structural elements of cosmic web are detected at definite level only in the newest works. For example, extragalactic filaments were described by velocity field and SDSS galaxy distribution during the last years. Large-scale structure of the Universe could be also mapped in the future using ATHENA observations in X-rays and SKA in radio band. Until detailed observations are not available for the most volume of Universe, some integral statistical parameters can be used for its description. Such methods as galaxy correlation function, power spectrum, statistical moments and peak statistics are commonly used with this aim. The parameters of power spectrum and other statistics are important for constraining the models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in one- and three-dimensional case with Fourier harmonics of hydrodynamical parameters. In result we get power-law relation for the matter power spectrum.
Small Scale Response and Modeling of Periodically Forced Turbulence
NASA Technical Reports Server (NTRS)
Bos, Wouter; Clark, Timothy T.; Rubinstein, Robert
2007-01-01
The response of the small scales of isotropic turbulence to periodic large scale forcing is studied using two-point closures. The frequency response of the turbulent kinetic energy and dissipation rate, and the phase shifts between production, energy and dissipation are determined as functions of Reynolds number. It is observed that the amplitude and phase of the dissipation exhibit nontrivial frequency and Reynolds number dependence that reveals a filtering effect of the energy cascade. Perturbation analysis is applied to understand this behavior which is shown to depend on distant interactions between widely separated scales of motion. Finally, the extent to which finite dimensional models (standard two-equation models and various generalizations) can reproduce the observed behavior is discussed.
ERIC Educational Resources Information Center
Lee, HyeSun; Geisinger, Kurt F.
2016-01-01
The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…
Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš
2016-01-01
Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may lead to a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows a potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.
Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D
2012-01-01
Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (eg, sequence conservation, orthology, synteny …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.
NASA Astrophysics Data System (ADS)
Okumura, Teppei; Takada, Masahiro; More, Surhud; Masaki, Shogo
2017-07-01
The peculiar velocity field measured by redshift-space distortions (RSD) in galaxy surveys provides a unique probe of the growth of large-scale structure. However, systematic effects arise when including satellite galaxies in the clustering analysis. Since satellite galaxies tend to reside in massive haloes with a greater halo bias, the inclusion boosts the clustering power. In addition, virial motions of the satellite galaxies cause a significant suppression of the clustering power due to non-linear RSD effects. We develop a novel method to recover the redshift-space power spectrum of haloes from the observed galaxy distribution by minimizing the contamination of satellite galaxies. The cylinder-grouping method (CGM) we study effectively excludes satellite galaxies from a galaxy sample. However, we find that this technique produces apparent anisotropies in the reconstructed halo distribution over all the scales which mimic RSD. On small scales, the apparent anisotropic clustering is caused by exclusion of haloes within the anisotropic cylinder used by the CGM. On large scales, the misidentification of different haloes in the large-scale structures, aligned along the line of sight, into the same CGM group causes the apparent anisotropic clustering via their cross-correlation with the CGM haloes. We construct an empirical model for the CGM halo power spectrum, which includes correction terms derived using the CGM window function at small scales as well as the linear matter power spectrum multiplied by a simple anisotropic function at large scales. We apply this model to a mock galaxy catalogue at z = 0.5, designed to resemble Sloan Digital Sky Survey-III Baryon Oscillation Spectroscopic Survey (BOSS) CMASS galaxies, and find that our model can predict both the monopole and quadrupole power spectra of the host haloes up to k < 0.5 {{h Mpc^{-1}}} to within 5 per cent.
A large scale analysis of genetic variants within putative miRNA binding sites in prostate cancer
Stegeman, Shane; Amankwah, Ernest; Klein, Kerenaftali; O’Mara, Tracy A.; Kim, Donghwa; Lin, Hui-Yi; Permuth-Wey, Jennifer; Sellers, Thomas A.; Srinivasan, Srilakshmi; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Olama, Ali Amin Al; Benlloch, Sara; Muir, Kenneth; Giles, Graham G.; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A.; Schleutker, Johanna; Nordestgaard, Børge G.; Travis, Ruth C.; Neal, David; Pharoah, Paul; Khaw, Kay-Tee; Stanford, Janet L.; Blot, William J.; Thibodeau, Stephen; Maier, Christiane; Kibel, Adam S.; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Teixeira, Manuel R.; Consortium, PRACTICAL; Spurdle, Amanda B.; Clements, Judith A.; Park, Jong Y.; Batra, Jyotsna
2015-01-01
Prostate cancer is the second most common malignancy among men worldwide. Genome-wide association studies (GWAS) have identified 100 risk variants for prostate cancer, which can explain ~33% of the familial risk of the disease. We hypothesized that a comprehensive analysis of genetic variations found within the 3′ UTR of genes predicted to affect miRNA binding (miRSNPs) can identify additional prostate cancer risk variants. We investigated the association between 2,169 miRSNPs and prostate cancer risk in a large-scale analysis of 22,301 cases and 22,320 controls of European ancestry from 23 participating studies. Twenty-two miRSNPs were associated (p<2.3×10−5) with risk of prostate cancer, 10 of which were within the 7 genes previously not mapped by GWASs. Further, using miRNA mimics and reporter gene assays, we showed that miR-3162-5p has specific affinity for the KLK3 rs1058205 miRSNP T-allele whilst miR-370 has greater affinity for the VAMP8 rs1010 miRSNP A-allele, validating their functional role. Significance Findings from this large association study suggest that a focus on miRSNPs, including functional evaluation, can identify candidate risk loci below currently accepted statistical levels of genome-wide significance. Studies of miRNAs and their interactions with SNPs could provide further insights into the mechanisms of prostate cancer risk. PMID:25691096
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.
2017-12-01
With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to processing these voluminous SLCs and interferograms at global scales, keeping up with the large daily SAR data volumes, and how to handle the voluminous data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also present some of our findings from applying machine learning and data analytics on the processed SAR data streams. We will also present lessons learned on how to ease the SAR community onto interfacing with these cloud-based SAR science data systems.
Abbott, J Haxby; Schmitt, John
2014-08-01
Multicenter, prospective, longitudinal cohort study. To investigate the minimum important difference (MID) of the Patient-Specific Functional Scale (PSFS), 4 region-specific outcome measures, and the numeric pain rating scale (NPRS) across 3 levels of patient-perceived global rating of change in a clinical setting. The MID varies depending on the external anchor defining patient-perceived "importance." The MID for the PSFS has not been established across all body regions. One thousand seven hundred eight consecutive patients with musculoskeletal disorders were recruited from 5 physical therapy clinics. The PSFS, NPRS, and 4 region-specific outcome measures-the Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale-were assessed at the initial and final physical therapy visits. Global rating of change was assessed at the final visit. MID was calculated for the PSFS and NPRS (overall and for each body region), and for each region-specific outcome measure, across 3 levels of change defined by the global rating of change (small, medium, large change) using receiver operating characteristic curve methodology. The MID for the PSFS (on a scale from 0 to 10) ranged from 1.3 (small change) to 2.3 (medium change) to 2.7 (large change), and was relatively stable across body regions. MIDs for the NPRS (-1.5 to -3.5), Oswestry Disability Index (-12), Neck Disability Index (-14), Upper Extremity Functional Index (6 to 11), and Lower Extremity Functional Scale (9 to 16) are also reported. We reported the MID for small, medium, and large patient-perceived change on the PSFS, NPRS, Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale for use in clinical practice and research.
Differentiating unipolar and bipolar depression by alterations in large-scale brain networks.
Goya-Maldonado, Roberto; Brodmann, Katja; Keil, Maria; Trost, Sarah; Dechent, Peter; Gruber, Oliver
2016-02-01
Misdiagnosing bipolar depression can lead to very deleterious consequences of mistreatment. Although depressive symptoms may be similarly expressed in unipolar and bipolar disorder, changes in specific brain networks could be very distinct, being therefore informative markers for the differential diagnosis. We aimed to characterize specific alterations in candidate large-scale networks (frontoparietal, cingulo-opercular, and default mode) in symptomatic unipolar and bipolar patients using resting state fMRI, a cognitively low demanding paradigm ideal to investigate patients. Networks were selected after independent component analysis, compared across 40 patients acutely depressed (20 unipolar, 20 bipolar), and 20 controls well-matched for age, gender, and education levels, and alterations were correlated to clinical parameters. Despite comparable symptoms, patient groups were robustly differentiated by large-scale network alterations. Differences were driven in bipolar patients by increased functional connectivity in the frontoparietal network, a central executive and externally-oriented network. Conversely, unipolar patients presented increased functional connectivity in the default mode network, an introspective and self-referential network, as much as reduced connectivity of the cingulo-opercular network to default mode regions, a network involved in detecting the need to switch between internally and externally oriented demands. These findings were mostly unaffected by current medication, comorbidity, and structural changes. Moreover, network alterations in unipolar patients were significantly correlated to the number of depressive episodes. Unipolar and bipolar groups displaying similar symptomatology could be clearly distinguished by characteristic changes in large-scale networks, encouraging further investigation of network fingerprints for clinical use. Hum Brain Mapp 37:808-818, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Federica Marcucci, Maria; Consolini, Giuseppe
2015-04-01
Recently we have investigated the spatial distribution of the scaling features of short-time scale magnetic field fluctuations using measurements from several ground-based geomagnetic observatories distributed in the northern hemisphere. We have found that the scaling features of fluctuations of the horizontal magnetic field component at time scales below 100 minutes are correlated with the geomagnetic activity level and with changes in the currents flowing in the ionosphere. Here, we present a detailed analysis of the dynamical changes of the magnetic field scaling features as a function of the geomagnetic activity level during the well-known large geomagnetic storm occurred on July, 15, 2000 (the Bastille event). The observed dynamical changes are discussed in relationship with the changes of the overall ionospheric polar convection and potential structure as reconstructed using SuperDARN data. This work is supported by the Italian National Program for Antarctic Research (PNRA) - Research Project 2013/AC3.08 and by the European Community's Seventh Framework Programme ([FP7/2007-2013]) under Grant no. 313038/STORM and
IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application
NASA Astrophysics Data System (ADS)
Gopu, A.; Hayashi, S.; Young, M. D.
2014-05-01
Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5 capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles producted from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images on their web-browser, adjust the intensity min/max cutoff or its scaling function, and zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.
Insights into Hox protein function from a large scale combinatorial analysis of protein domains.
Merabet, Samir; Litim-Mecheri, Isma; Karlsson, Daniel; Dixit, Richa; Saadaoui, Mehdi; Monier, Bruno; Brun, Christine; Thor, Stefan; Vijayraghavan, K; Perrin, Laurent; Pradel, Jacques; Graba, Yacine
2011-10-01
Protein function is encoded within protein sequence and protein domains. However, how protein domains cooperate within a protein to modulate overall activity and how this impacts functional diversification at the molecular and organism levels remains largely unaddressed. Focusing on three domains of the central class Drosophila Hox transcription factor AbdominalA (AbdA), we used combinatorial domain mutations and most known AbdA developmental functions as biological readouts to investigate how protein domains collectively shape protein activity. The results uncover redundancy, interactivity, and multifunctionality of protein domains as salient features underlying overall AbdA protein activity, providing means to apprehend functional diversity and accounting for the robustness of Hox-controlled developmental programs. Importantly, the results highlight context-dependency in protein domain usage and interaction, allowing major modifications in domains to be tolerated without general functional loss. The non-pleoitropic effect of domain mutation suggests that protein modification may contribute more broadly to molecular changes underlying morphological diversification during evolution, so far thought to rely largely on modification in gene cis-regulatory sequences.
Insights into Hox Protein Function from a Large Scale Combinatorial Analysis of Protein Domains
Karlsson, Daniel; Dixit, Richa; Saadaoui, Mehdi; Monier, Bruno; Brun, Christine; Thor, Stefan; Vijayraghavan, K.; Perrin, Laurent; Pradel, Jacques; Graba, Yacine
2011-01-01
Protein function is encoded within protein sequence and protein domains. However, how protein domains cooperate within a protein to modulate overall activity and how this impacts functional diversification at the molecular and organism levels remains largely unaddressed. Focusing on three domains of the central class Drosophila Hox transcription factor AbdominalA (AbdA), we used combinatorial domain mutations and most known AbdA developmental functions as biological readouts to investigate how protein domains collectively shape protein activity. The results uncover redundancy, interactivity, and multifunctionality of protein domains as salient features underlying overall AbdA protein activity, providing means to apprehend functional diversity and accounting for the robustness of Hox-controlled developmental programs. Importantly, the results highlight context-dependency in protein domain usage and interaction, allowing major modifications in domains to be tolerated without general functional loss. The non-pleoitropic effect of domain mutation suggests that protein modification may contribute more broadly to molecular changes underlying morphological diversification during evolution, so far thought to rely largely on modification in gene cis-regulatory sequences. PMID:22046139
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality.https://earthengine.google.org
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
Large-Scale Computation of Nuclear Magnetic Resonance Shifts for Paramagnetic Solids Using CP2K.
Mondal, Arobendo; Gaultois, Michael W; Pell, Andrew J; Iannuzzi, Marcella; Grey, Clare P; Hutter, Jürg; Kaupp, Martin
2018-01-09
Large-scale computations of nuclear magnetic resonance (NMR) shifts for extended paramagnetic solids (pNMR) are reported using the highly efficient Gaussian-augmented plane-wave implementation of the CP2K code. Combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals, contact, pseudocontact, and orbital-shift contributions to pNMR shifts are accessible. Due to the efficient and highly parallel performance of CP2K, a wide variety of materials with large unit cells can be studied with extended Gaussian basis sets. Validation of various approaches for the different contributions to pNMR shifts is done first for molecules in a large supercell in comparison with typical quantum-chemical codes. This is then extended to a detailed study of g-tensors for extended solid transition-metal fluorides and for a series of complex lithium vanadium phosphates. Finally, lithium pNMR shifts are computed for Li 3 V 2 (PO 4 ) 3 , for which detailed experimental data are available. This has allowed an in-depth study of different approaches (e.g., full periodic versus incremental cluster computations of g-tensors and different functionals and basis sets for hyperfine computations) as well as a thorough analysis of the different contributions to the pNMR shifts. This study paves the way for a more-widespread computational treatment of NMR shifts for paramagnetic materials.
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation showed that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem is provided establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the {O}(1/k) convergence rate with respect to the objective function values. Finally, we present the results of a numerical experience on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)
ERIC Educational Resources Information Center
Peterson, Shelley Stagg; McClay, Jill; Main, Kristin
2011-01-01
This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…
Past and present cosmic structure in the SDSS DR7 main sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr
2015-01-01
We present a chrono-cosmography project, aiming at the inference of the four dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structuremore » formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly-detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.« less
Applications of large-scale density functional theory in biology
NASA Astrophysics Data System (ADS)
Cole, Daniel J.; Hine, Nicholas D. M.
2016-10-01
Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Haag, Nicole
2017-01-01
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which compliments the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
NASA Astrophysics Data System (ADS)
Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.
2016-12-01
National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time consuming for use in large-scale SOC monitoring. We seek to understand if analytical thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yields valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that goes beyond chemical composition to better approach quantification of biogeochemical stability.
Lix, Lisa M; Wu, Xiuyun; Hopman, Wilma; Mayo, Nancy; Sajobi, Tolulope T; Liu, Juxin; Prior, Jerilynn C; Papaioannou, Alexandra; Josse, Robert G; Towheed, Tanveer E; Davison, K Shawn; Sawatzky, Richard
2016-01-01
Self-reported health status measures, like the Short Form 36-item Health Survey (SF-36), can provide rich information about the overall health of a population and its components, such as physical, mental, and social health. However, differential item functioning (DIF), which arises when population sub-groups with the same underlying (i.e., latent) level of health have different measured item response probabilities, may compromise the comparability of these measures. The purpose of this study was to test for DIF on the SF-36 physical functioning (PF) and mental health (MH) sub-scale items in a Canadian population-based sample. Study data were from the prospective Canadian Multicentre Osteoporosis Study (CaMos), which collected baseline data in 1996-1997. DIF was tested using a multiple indicators multiple causes (MIMIC) method. Confirmatory factor analysis defined the latent variable measurement model for the item responses and latent variable regression with demographic and health status covariates (i.e., sex, age group, body weight, self-perceived general health) produced estimates of the magnitude of DIF effects. The CaMos cohort consisted of 9423 respondents; 69.4% were female and 51.7% were less than 65 years. Eight of 10 items on the PF sub-scale and four of five items on the MH sub-scale exhibited DIF. Large DIF effects were observed on PF sub-scale items about vigorous and moderate activities, lifting and carrying groceries, walking one block, and bathing or dressing. On the MH sub-scale items, all DIF effects were small or moderate in size. SF-36 PF and MH sub-scale scores were not comparable across population sub-groups defined by demographic and health status variables due to the effects of DIF, although the magnitude of this bias was not large for most items. We recommend testing and adjusting for DIF to ensure comparability of the SF-36 in population-based investigations.
Kimura, Rie; Saiki, Akiko; Fujiwara-Tsukamoto, Yoko; Sakai, Yutaka; Isomura, Yoshikazu
2017-01-01
There have been few systematic population-wide analyses of relationships between spike synchrony within a period of several milliseconds and behavioural functions. In this study, we obtained a large amount of spike data from > 23,000 neuron pairs by multiple single-unit recording from deep layer neurons in motor cortical areas in rats performing a forelimb movement task. The temporal changes of spike synchrony in the whole neuron pairs were statistically independent of behavioural changes during the task performance, although some neuron pairs exhibited correlated changes in spike synchrony. Mutual information analyses revealed that spike synchrony made a smaller contribution than spike rate to behavioural functions. The strength of spike synchrony between two neurons was statistically independent of the spike rate-based preferences of the pair for behavioural functions. Spike synchrony within a period of several milliseconds in presynaptic neurons enables effective integration of functional information in the postsynaptic neuron. However, few studies have systematically analysed the population-wide relationships between spike synchrony and behavioural functions. Here we obtained a sufficiently large amount of spike data among regular-spiking (putatively excitatory) and fast-spiking (putatively inhibitory) neuron subtypes (> 23,000 pairs) by multiple single-unit recording from deep layers in motor cortical areas (caudal forelimb area, rostral forelimb area) in rats performing a forelimb movement task. After holding a lever, rats pulled the lever either in response to a cue tone (external-trigger trials) or spontaneously without any cue (internal-trigger trials). Many neurons exhibited functional spike activity in association with forelimb movements, and the preference of regular-spiking neurons in the rostral forelimb area was more biased toward externally triggered movement than that in the caudal forelimb area. We found that a population of neuron pairs with spike synchrony does exist, and that some neuron pairs exhibit a dependence on movement phase during task performance. However, the population-wide analysis revealed that spike synchrony was statistically independent of the movement phase and the spike rate-based preferences of the pair for behavioural functions, whereas spike rates were clearly dependent on the movement phase. In fact, mutual information analyses revealed that the contribution of spike synchrony to the behavioural functions was small relative to the contribution of spike rate. Our large-scale analysis revealed that cortical spike rate, rather than spike synchrony, contributes to population coding for movement. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Liu, Huayun; Yu, Juping; Chen, Yongyi; He, Pingping; Zhou, Lianqing; Tang, Xinhui; Liu, Xiangyu; Li, Xuying; Wu, Yanping; Wang, Yuhua
2016-02-01
This study aimed to examine the psychometric properties and performance of a Chinese version of the Female Sexual Function Index (FSFI) among a sample of Chinese women with cervical cancer. A cross-sectional survey design was used. The respondents included 215 women with cervical cancer in an oncology hospital in China. A translated Chinese version of the FSFI was used to investigate their sexual functioning. Psychometric testing included internal consistency reliability (Cronbach's alpha coefficient and item-total correlations), test-retest reliability, construct validity (principal component analysis via oblique rotation and confirmatory factor analysis), and variability (floor and ceiling effects). The mean score of the total scale was 20.65 ± 4.77. The Cronbach values were .94 for the total scale, .72-.90 for the domains. Test-retest correlation coefficients over 2-4 weeks were .84 (p < .05) for the total scale, .68-.83 for the subscales. Item-total correlation coefficients ranged between .47 and .83 (p < .05). A five-factor model was identified via principal component analysis and established by confirmatory factor analysis, including desire/arousal, lubrication, orgasm, satisfaction, and pain. There was no evidence of floor or ceiling effects. With good psychometric properties similar to its original English version, this Chinese version of the FSFI is demonstrated to be a reliable and valid instrument that can be used to assess sexual functioning of women with cervical cancer in China. Future research is still needed to confirm its psychometric properties and performance among a large sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Irwin, Ryan W.; Tinker, Michael L.
2005-01-01
Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
Altermatt, Anna; Gaetano, Laura; Magon, Stefano; Häring, Dieter A; Tomic, Davorka; Wuerfel, Jens; Radue, Ernst-Wilhelm; Kappos, Ludwig; Sprenger, Till
2018-05-29
There is a limited correlation between white matter (WM) lesion load as determined by magnetic resonance imaging and disability in multiple sclerosis (MS). The reasons for this so-called clinico-radiological paradox are diverse and may, at least partly, relate to the fact that not just the overall lesion burden, but also the exact anatomical location of lesions predict the severity and type of disability. We aimed at studying the relationship between lesion distribution and disability using a voxel-based lesion probability mapping approach in a very large dataset of MS patients. T2-weighted lesion masks of 2348 relapsing-remitting MS patients were spatially normalized to standard stereotaxic space by non-linear registration. Relations between supratentorial WM lesion locations and disability measures were assessed using a non-parametric ANCOVA (Expanded Disability Status Scale [EDSS]; Multiple Sclerosis Functional Composite, and subscores; Modified Fatigue Impact Scale) or multinomial ordinal logistic regression (EDSS functional subscores). Data from 1907 (81%) patients were included in the analysis because of successful registration. The lesion mapping showed similar areas to be associated with the different disability scales: periventricular regions in temporal, frontal, and limbic lobes were predictive, mainly affecting the posterior thalamic radiation, the anterior, posterior, and superior parts of the corona radiata. In summary, significant associations between lesion location and clinical scores were found in periventricular areas. Such lesion clusters appear to be associated with impairment of different physical and cognitive abilities, probably because they affect commissural and long projection fibers, which are relevant WM pathways supporting many different brain functions.
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions-genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA © The Author 2014. Published by Oxford University Press.
The up-scaling of ecosystem functions in a heterogeneous world
NASA Astrophysics Data System (ADS)
Lohrer, Andrew M.; Thrush, Simon F.; Hewitt, Judi E.; Kraan, Casper
2015-05-01
Earth is in the midst of a biodiversity crisis that is impacting the functioning of ecosystems and the delivery of valued goods and services. However, the implications of large scale species losses are often inferred from small scale ecosystem functioning experiments with little knowledge of how the dominant drivers of functioning shift across scales. Here, by integrating observational and manipulative experimental field data, we reveal scale-dependent influences on primary productivity in shallow marine habitats, thus demonstrating the scalability of complex ecological relationships contributing to coastal marine ecosystem functioning. Positive effects of key consumers (burrowing urchins, Echinocardium cordatum) on seafloor net primary productivity (NPP) elucidated by short-term, single-site experiments persisted across multiple sites and years. Additional experimentation illustrated how these effects amplified over time, resulting in greater primary producer biomass sediment chlorophyll a content (Chla) in the longer term, depending on climatic context and habitat factors affecting the strengths of mutually reinforcing feedbacks. The remarkable coherence of results from small and large scales is evidence of real-world ecosystem function scalability and ecological self-organisation. This discovery provides greater insights into the range of responses to broad-scale anthropogenic stressors in naturally heterogeneous environmental settings.
The up-scaling of ecosystem functions in a heterogeneous world
Lohrer, Andrew M.; Thrush, Simon F.; Hewitt, Judi E.; Kraan, Casper
2015-01-01
Earth is in the midst of a biodiversity crisis that is impacting the functioning of ecosystems and the delivery of valued goods and services. However, the implications of large scale species losses are often inferred from small scale ecosystem functioning experiments with little knowledge of how the dominant drivers of functioning shift across scales. Here, by integrating observational and manipulative experimental field data, we reveal scale-dependent influences on primary productivity in shallow marine habitats, thus demonstrating the scalability of complex ecological relationships contributing to coastal marine ecosystem functioning. Positive effects of key consumers (burrowing urchins, Echinocardium cordatum) on seafloor net primary productivity (NPP) elucidated by short-term, single-site experiments persisted across multiple sites and years. Additional experimentation illustrated how these effects amplified over time, resulting in greater primary producer biomass sediment chlorophyll a content (Chla) in the longer term, depending on climatic context and habitat factors affecting the strengths of mutually reinforcing feedbacks. The remarkable coherence of results from small and large scales is evidence of real-world ecosystem function scalability and ecological self-organisation. This discovery provides greater insights into the range of responses to broad-scale anthropogenic stressors in naturally heterogeneous environmental settings. PMID:25993477
Manor, Ohad; Borenstein, Elhanan
2017-02-08
Comparative analyses of the human microbiome have identified both taxonomic and functional shifts that are associated with numerous diseases. To date, however, microbiome taxonomy and function have mostly been studied independently and the taxonomic drivers of functional imbalances have not been systematically identified. Here, we present FishTaco, an analytical and computational framework that integrates taxonomic and functional comparative analyses to accurately quantify taxon-level contributions to disease-associated functional shifts. Applying FishTaco to several large-scale metagenomic cohorts, we show that shifts in the microbiome's functional capacity can be traced back to specific taxa. Furthermore, the set of taxa driving functional shifts and their contribution levels vary markedly between functions. We additionally find that similar functional imbalances in different diseases are driven by both disease-specific and shared taxa. Such integrated analysis of microbiome ecological and functional dynamics can inform future microbiome-based therapy, pinpointing putative intervention targets for manipulating the microbiome's functional capacity. Copyright © 2017 Elsevier Inc. All rights reserved.
Multi-scale chromatin state annotation using a hierarchical hidden Markov model
NASA Astrophysics Data System (ADS)
Marco, Eugenio; Meuleman, Wouter; Huang, Jialiang; Glass, Kimberly; Pinello, Luca; Wang, Jianrong; Kellis, Manolis; Yuan, Guo-Cheng
2017-04-01
Chromatin-state analysis is widely applied in the studies of development and diseases. However, existing methods operate at a single length scale, and therefore cannot distinguish large domains from isolated elements of the same type. To overcome this limitation, we present a hierarchical hidden Markov model, diHMM, to systematically annotate chromatin states at multiple length scales. We apply diHMM to analyse a public ChIP-seq data set. diHMM not only accurately captures nucleosome-level information, but identifies domain-level states that vary in nucleosome-level state composition, spatial distribution and functionality. The domain-level states recapitulate known patterns such as super-enhancers, bivalent promoters and Polycomb repressed regions, and identify additional patterns whose biological functions are not yet characterized. By integrating chromatin-state information with gene expression and Hi-C data, we identify context-dependent functions of nucleosome-level states. Thus, diHMM provides a powerful tool for investigating the role of higher-order chromatin structure in gene regulation.
The relative efficiency of modular and non-modular networks of different size
Tosh, Colin R.; McNally, Luke
2015-01-01
Most biological networks are modular but previous work with small model networks has indicated that modularity does not necessarily lead to increased functional efficiency. Most biological networks are large, however, and here we examine the relative functional efficiency of modular and non-modular neural networks at a range of sizes. We conduct a detailed analysis of efficiency in networks of two size classes: ‘small’ and ‘large’, and a less detailed analysis across a range of network sizes. The former analysis reveals that while the modular network is less efficient than one of the two non-modular networks considered when networks are small, it is usually equally or more efficient than both non-modular networks when networks are large. The latter analysis shows that in networks of small to intermediate size, modular networks are much more efficient that non-modular networks of the same (low) connective density. If connective density must be kept low to reduce energy needs for example, this could promote modularity. We have shown how relative functionality/performance scales with network size, but the precise nature of evolutionary relationship between network size and prevalence of modularity will depend on the costs of connectivity. PMID:25631996
Volatility return intervals analysis of the Japanese market
NASA Astrophysics Data System (ADS)
Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.
2008-03-01
We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution usingmore » initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scale after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.« less
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Huffman, Allan W.; Lux, Kevin M.; Charney, Joseph J.; Riordan, Allan J.; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2002-01-01
A 44 case study analysis of the large-scale atmospheric structure associated with development of accident-producing aircraft turbulence is described. Categorization is a function of the accident location, altitude, time of year, time of day, and the turbulence category, which classifies disturbances. National Centers for Environmental Prediction Reanalyses data sets and satellite imagery are employed to diagnose synoptic scale predictor fields associated with the large-scale environment preceding severe turbulence. These analyses indicate a predominance of severe accident-producing turbulence within the entrance region of a jet stream at the synoptic scale. Typically, a flow curvature region is just upstream within the jet entrance region, convection is within 100 km of the accident, vertical motion is upward, absolute vorticity is low, vertical wind shear is increasing, and horizontal cold advection is substantial. The most consistent predictor is upstream flow curvature and nearby convection is the second most frequent predictor.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.
Huang, Zhenzhen; Duan, Huilong; Li, Haomin
2015-01-01
Large-scale human cancer genomics projects, such as TCGA, generated large genomics data for further study. Exploring and mining these data to obtain meaningful analysis results can help researchers find potential genomics alterations that intervene the development and metastasis of tumors. We developed a web-based gene analysis platform, named TCGA4U, which used statistics methods and models to help translational investigators explore, mine and visualize human cancer genomic characteristic information from the TCGA datasets. Furthermore, through Gene Ontology (GO) annotation and clinical data integration, the genomic data were transformed into biological process, molecular function, cellular component and survival curves to help researchers identify potential driver genes. Clinical researchers without expertise in data analysis will benefit from such a user-friendly genomic analysis platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit
2017-09-12
Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction,more » respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.« less
A meta-analysis of zooplankton functional traits influencing ecosystem function.
Hébert, Marie-Pier; Beisner, Beatrix E; Maranger, Roxane
2016-04-01
The use of functional traits to characterize community composition has been proposed as a more effective way to link community structure to ecosystem functioning. Organismal morphology, body stoichiometry, and physiology can be readily linked to large-scale ecosystem processes through functional traits that inform on interspecific and species-environment interactions; yet such effect traits are still poorly included in trait-based approaches. Given their key trophic position in aquatic ecosystems, individual zooplankton affect energy fluxes and elemental processing. We compiled a large database of zooplankton traits contributing to carbon, nitrogen, and phosphorus cycling and examined the effect of classification and habitat (marine vs. freshwater) on trait relationships. Respiration and nutrient excretion rates followed mass-dependent scaling in both habitats, with exponents ranging from 0.70 to 0.90. Our analyses revealed surprising differences in allometry and respiration between habitats, with freshwater species having lower length-specific mass and three times higher mass-specific respiration rates. These differences in traits point to implications for ecological strategies as well as overall carbon storage and fluxes based on habitat type. Our synthesis quantifies multiple trait relationships and links organisms to ecosystem processes they influence, enabling a more complete integration of aquatic community ecology and biogeochemistry through the promising use of effect traits.
Large-scale data integration framework provides a comprehensive view on glioblastoma multiforme.
Ovaska, Kristian; Laakso, Marko; Haapa-Paananen, Saija; Louhimo, Riku; Chen, Ping; Aittomäki, Viljami; Valo, Erkka; Núñez-Fontarnau, Javier; Rantanen, Ville; Karinen, Sirkku; Nousiainen, Kari; Lahesmaa-Korpinen, Anna-Maria; Miettinen, Minna; Saarinen, Lilli; Kohonen, Pekka; Wu, Jianmin; Westermarck, Jukka; Hautaniemi, Sampsa
2010-09-07
Coordinated efforts to collect large-scale data sets provide a basis for systems level understanding of complex diseases. In order to translate these fragmented and heterogeneous data sets into knowledge and medical benefits, advanced computational methods for data analysis, integration and visualization are needed. We introduce a novel data integration framework, Anduril, for translating fragmented large-scale data into testable predictions. The Anduril framework allows rapid integration of heterogeneous data with state-of-the-art computational methods and existing knowledge in bio-databases. Anduril automatically generates thorough summary reports and a website that shows the most relevant features of each gene at a glance, allows sorting of data based on different parameters, and provides direct links to more detailed data on genes, transcripts or genomic regions. Anduril is open-source; all methods and documentation are freely available. We have integrated multidimensional molecular and clinical data from 338 subjects having glioblastoma multiforme, one of the deadliest and most poorly understood cancers, using Anduril. The central objective of our approach is to identify genetic loci and genes that have significant survival effect. Our results suggest several novel genetic alterations linked to glioblastoma multiforme progression and, more specifically, reveal Moesin as a novel glioblastoma multiforme-associated gene that has a strong survival effect and whose depletion in vitro significantly inhibited cell proliferation. All analysis results are available as a comprehensive website. Our results demonstrate that integrated analysis and visualization of multidimensional and heterogeneous data by Anduril enables drawing conclusions on functional consequences of large-scale molecular data. Many of the identified genetic loci and genes having significant survival effect have not been reported earlier in the context of glioblastoma multiforme. Thus, in addition to generally applicable novel methodology, our results provide several glioblastoma multiforme candidate genes for further studies.Anduril is available at http://csbi.ltdk.helsinki.fi/anduril/The glioblastoma multiforme analysis results are available at http://csbi.ltdk.helsinki.fi/anduril/tcga-gbm/
Remote visualization and scale analysis of large turbulence datatsets
NASA Astrophysics Data System (ADS)
Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.
2015-12-01
Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
Intermediate-scale plasma irregularities in the polar ionosphere inferred from GPS radio occultation
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O.; Butala, M. D.; Mannucci, A. J.
2015-02-01
We report intermediate-scale plasma irregularities in the polar ionosphere inferred from high-resolution radio occultation (RO) measurements using GPS (Global Positioning System) to CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) satellite radio links. The high inclination of CASSIOPE and the high rate of signal reception by the GPS Attitude, Positioning, and Profiling RO receiver on CASSIOPE enable a high-resolution investigation of the dynamics of the polar ionosphere with unprecedented detail. Intermediate-scale, scintillation-producing irregularities, which correspond to 1 to 40 km scales, were inferred by applying multiscale spectral analysis on the RO phase measurements. Using our multiscale spectral analysis approach and satellite data (Polar Operational Environmental Satellites and Defense Meteorological Satellite Program), we discovered that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap. We found that large length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap implying that the irregularity scales and phase scintillation characteristics are a function of the solar wind and magnetospheric forcings.
2014-01-01
Background The rice interactome, in which a network of protein-protein interactions has been elucidated in rice, is a useful resource to identify functional modules of rice signal transduction pathways. Protein-protein interactions occur in cells in two ways, constitutive and regulative. While a yeast-based high-throughput method has been widely used to identify the constitutive interactions, a method to detect the regulated interactions is rarely developed for a large-scale analysis. Results A split luciferase complementation assay was applied to detect the regulated interactions in rice. A transformation method of rice protoplasts in a 96-well plate was first established for a large-scale analysis. In addition, an antibody that specifically recognizes a carboxyl-terminal fragment of Renilla luciferase was newly developed. A pair of antibodies that recognize amino- and carboxyl- terminal fragments of Renilla luciferase, respectively, was then used to monitor quality and quantity of interacting recombinant-proteins accumulated in the cells. For a proof-of-concept, the method was applied to detect the gibberellin-dependent interaction between GIBBERELLIN INSENSITIVE DWARF1 and SLENDER RICE 1. Conclusions A method to detect regulated protein-protein interactions was developed towards establishment of the rice interactome. PMID:24987490
Wild, Philipp S.; Felix, Janine F.; Schillert, Arne; Chen, Ming-Huei; Leening, Maarten J.G.; Völker, Uwe; Großmann, Vera; Brody, Jennifer A.; Irvin, Marguerite R.; Shah, Sanjiv J.; Pramana, Setia; Lieb, Wolfgang; Schmidt, Reinhold; Stanton, Alice V.; Malzahn, Dörthe; Lyytikäinen, Leo-Pekka; Tiller, Daniel; Smith, J. Gustav; Di Tullio, Marco R.; Musani, Solomon K.; Morrison, Alanna C.; Pers, Tune H.; Morley, Michael; Kleber, Marcus E.; Aragam, Jayashri; Bis, Joshua C.; Bisping, Egbert; Broeckel, Ulrich; Cheng, Susan; Deckers, Jaap W.; Del Greco M, Fabiola; Edelmann, Frank; Fornage, Myriam; Franke, Lude; Friedrich, Nele; Harris, Tamara B.; Hofer, Edith; Hofman, Albert; Huang, Jie; Hughes, Alun D.; Kähönen, Mika; investigators, KNHI; Kruppa, Jochen; Lackner, Karl J.; Lannfelt, Lars; Laskowski, Rafael; Launer, Lenore J.; Lindgren, Cecilia M.; Loley, Christina; Mayet, Jamil; Medenwald, Daniel; Morris, Andrew P.; Müller, Christian; Müller-Nurasyid, Martina; Nappo, Stefania; Nilsson, Peter M.; Nuding, Sebastian; Nutile, Teresa; Peters, Annette; Pfeufer, Arne; Pietzner, Diana; Pramstaller, Peter P.; Raitakari, Olli T.; Rice, Kenneth M.; Rotter, Jerome I.; Ruohonen, Saku T.; Sacco, Ralph L.; Samdarshi, Tandaw E.; Sharp, Andrew S.P.; Shields, Denis C.; Sorice, Rossella; Sotoodehnia, Nona; Stricker, Bruno H.; Surendran, Praveen; Töglhofer, Anna M.; Uitterlinden, André G.; Völzke, Henry; Ziegler, Andreas; Münzel, Thomas; März, Winfried; Cappola, Thomas P.; Hirschhorn, Joel N.; Mitchell, Gary F.; Smith, Nicholas L.; Fox, Ervin R.; Dueker, Nicole D.; Jaddoe, Vincent W.V.; Melander, Olle; Lehtimäki, Terho; Ciullo, Marina; Hicks, Andrew A.; Lind, Lars; Gudnason, Vilmundur; Pieske, Burkert; Barron, Anthony J.; Zweiker, Robert; Schunkert, Heribert; Ingelsson, Erik; Liu, Kiang; Arnett, Donna K.; Psaty, Bruce M.; Blankenberg, Stefan; Larson, Martin G.; Felix, Stephan B.; Franco, Oscar H.; Zeller, Tanja; Vasan, Ramachandran S.; Dörr, Marcus
2017-01-01
BACKGROUND. Understanding the genetic architecture of cardiac structure and function may help to prevent and treat heart disease. This investigation sought to identify common genetic variations associated with inter-individual variability in cardiac structure and function. METHODS. A GWAS meta-analysis of echocardiographic traits was performed, including 46,533 individuals from 30 studies (EchoGen consortium). The analysis included 16 traits of left ventricular (LV) structure, and systolic and diastolic function. RESULTS. The discovery analysis included 21 cohorts for structural and systolic function traits (n = 32,212) and 17 cohorts for diastolic function traits (n = 21,852). Replication was performed in 5 cohorts (n = 14,321) and 6 cohorts (n = 16,308), respectively. Besides 5 previously reported loci, the combined meta-analysis identified 10 additional genome-wide significant SNPs: rs12541595 near MTSS1 and rs10774625 in ATXN2 for LV end-diastolic internal dimension; rs806322 near KCNRG, rs4765663 in CACNA1C, rs6702619 near PALMD, rs7127129 in TMEM16A, rs11207426 near FGGY, rs17608766 in GOSR2, and rs17696696 in CFDP1 for aortic root diameter; and rs12440869 in IQCH for Doppler transmitral A-wave peak velocity. Findings were in part validated in other cohorts and in GWAS of related disease traits. The genetic loci showed associations with putative signaling pathways, and with gene expression in whole blood, monocytes, and myocardial tissue. CONCLUSION. The additional genetic loci identified in this large meta-analysis of cardiac structure and function provide insights into the underlying genetic architecture of cardiac structure and warrant follow-up in future functional studies. FUNDING. For detailed information per study, see Acknowledgments. PMID:28394258
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu
2016-01-01
Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and sensory-motor system, indicating that large scale networks can be modulated in a frequency tagging way. Furthermore, the inter- and intranetwork synchronizations as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much stronger than distinguishing attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it paves a new way to investigate frequency-specific large scale brain activities. © 2015 Wiley Periodicals, Inc.
Aeration costs in stirred-tank and bubble column bioreactors
Humbird, D.; Davis, R.; McMillan, J. D.
2017-08-10
To overcome knowledge gaps in the economics of large-scale aeration for production of commodity products, Aspen Plus is used to simulate steady-state oxygen delivery in both stirred-tank and bubble column bioreactors, using published engineering correlations for oxygen mass transfer as a function of aeration rate and power input, coupled with new equipment cost estimates developed in Aspen Capital Cost Estimator and validated against vendor quotations. Here, these simulations describe the cost efficiency of oxygen delivery as a function of oxygen uptake rate and vessel size, and show that capital and operating costs for oxygen delivery drop considerably moving from standard-sizemore » (200 m 3) to world-class size (500 m 3) reactors, but only marginally in further scaling up to hypothetically large (1000 m 3) reactors. Finally, this analysis suggests bubble-column reactor systems can reduce overall costs for oxygen delivery by 10-20% relative to stirred tanks at low to moderate oxygen transfer rates up to 150 mmol/L-h.« less
Aeration costs in stirred-tank and bubble column bioreactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbird, D.; Davis, R.; McMillan, J. D.
To overcome knowledge gaps in the economics of large-scale aeration for production of commodity products, Aspen Plus is used to simulate steady-state oxygen delivery in both stirred-tank and bubble column bioreactors, using published engineering correlations for oxygen mass transfer as a function of aeration rate and power input, coupled with new equipment cost estimates developed in Aspen Capital Cost Estimator and validated against vendor quotations. Here, these simulations describe the cost efficiency of oxygen delivery as a function of oxygen uptake rate and vessel size, and show that capital and operating costs for oxygen delivery drop considerably moving from standard-sizemore » (200 m 3) to world-class size (500 m 3) reactors, but only marginally in further scaling up to hypothetically large (1000 m 3) reactors. Finally, this analysis suggests bubble-column reactor systems can reduce overall costs for oxygen delivery by 10-20% relative to stirred tanks at low to moderate oxygen transfer rates up to 150 mmol/L-h.« less
Functional Interaction Network Construction and Analysis for Disease Discovery.
Wu, Guanming; Haw, Robin
2017-01-01
Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, therefore providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of data via using network modules and increasing the statistic analysis power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60 % of total human genes and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedures on how this functional interaction network is constructed by integrating multiple external data sources, extracting functional interactions from human curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions based on the trained Naïve Bayesian Classifier, and finally constructing the functional interaction database. We also provide an example on how to use ReactomeFIViz for performing network-based data analysis for a list of genes.
Wang, Y.S.; Miller, D.R.; Anderson, D.E.; Cionco, R.M.; Lin, J.D.
1992-01-01
Turbulent flow within and above an almond orchard was measured with three-dimensional wind sensors and fine-wire thermocouple sensors arranged in a horizontal array. The data showed organized turbulent structures as indicated by coherent asymmetric ramp patterns in the time series traces across the sensor array. Space-time correlation analysis indicated that velocity and temperature fluctuations were significantly correlated over a transverse distance more than 4m. Integral length scales of velocity and temperature fluctuations were substantially greater in unstable conditions than those in stable conditions. The coherence spectral analysis indicated that Davenport's geometric similarity hypothesis was satisfied in the lower frequency region. From the geometric similarity hypothesis, the spatial extents of large ramp structures were also estimated with the coherence functions.
Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards
ERIC Educational Resources Information Center
Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.
2011-01-01
This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search nine rewards placed in…
Large-scale transport across narrow gaps in rod bundles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guellouz, M.S.; Tavoularis, S.
1995-09-01
Flow visualization and how-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to compatible with the fieldmore » of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.« less
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.Graphical abstract.
Using the Saccharomyces Genome Database (SGD) for analysis of genomic information
Skrzypek, Marek S.; Hirschman, Jodi
2011-01-01
Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739
A rapid local singularity analysis algorithm with applications
NASA Astrophysics Data System (ADS)
Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits
2015-04-01
The local singularity model developed by Cheng is fast gaining popularity in characterizing mineralization and detecting anomalies of geochemical, geophysical and remote sensing data. However in one of the conventional algorithms involving the moving average values with different scales is time-consuming especially while analyzing a large dataset. Summed area table (SAT), also called as integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision area. Historically, the principle of SAT is well-known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (area under the probability distribution) from the respective cumulative distribution functions. We introduce SAT and it's variation Rotated Summed Area Table in the isotropic, anisotropic or directional local singularity mapping in this study. Once computed using SAT, any one of the rectangular sum can be computed at any scale or location in constant time. The area for any rectangular region in the image can be computed by using only 4 array accesses in constant time independently of the size of the region; effectively reducing the time complexity from O(n) to O(1). New programs using Python, Julia, matlab and C++ are implemented respectively to satisfy different applications, especially to the big data analysis. Several large geochemical and remote sensing datasets are tested. A wide variety of scale changes (linear spacing or log spacing) for non-iterative or iterative approach are adopted to calculate the singularity index values and compare the results. The results indicate that the local singularity analysis with SAT is more robust and superior to traditional approach in identifying anomalies.
NASA Technical Reports Server (NTRS)
Blumenthal, George R.; Johnston, Kathryn V.
1994-01-01
The Sachs-Wolfe effect is known to produce large angular scale fluctuations in the cosmic microwave background radiation (CMBR) due to gravitational potential fluctuations. We show how the angular correlation function of the CMBR can be expressed explicitly in terms of the mass autocorrelation function xi(r) in the universe. We derive analytic expressions for the angular correlation function and its multipole moments in terms of integrals over xi(r) or its second moment, J(sub 3)(r), which does not need to satisfy the sort of integral constraint that xi(r) must. We derive similar expressions for bulk flow velocity in terms of xi and J(sub 3). One interesting result that emerges directly from this analysis is that, for all angles theta, there is a substantial contribution to the correlation function from a wide range of distance r and that radial shape of this contribution does not vary greatly with angle.
Ogawa, Takeshi; Aihara, Takatsugu; Shimokawa, Takeaki; Yamashita, Okito
2018-04-24
Creative insight occurs with an "Aha!" experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21-69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.
Predictors of health-related quality of life in patients with chronic liver disease.
Afendy, A; Kallman, J B; Stepanova, M; Younoszai, Z; Aquino, R D; Bianchi, G; Marchesini, G; Younossi, Z M
2009-09-01
Patient-reported outcomes like health-related quality of life (HRQL) have become increasingly important for full assessment of patients with chronic liver diseases (CLD). To explore the relative impact of different types of liver disease on HRQL as well as predictors of HRQL domains in CLD. Our HRQL databases with Short-Form 36 (SF-36) data were used. Scores for each of SF-36 scales (PF - physical functioning, RP - role functioning, BP - bodily pain, GH - general health, VT - vitality, SF - social functioning, RE - role emotional and MH - mental health, MCS - mental component score, PCS - physical component score) were compared between different types of CLD as well as other variables. Complete data were available for 1103 CLD patients. Demographic and clinical data included: age 54.2 +/- 12.0 years, 40% female, 761 (69%) with cirrhosis. Analysis revealed that age correlated significantly (P < 0.05) with worsening HRQL on every scale of the SF-36. Female patients had more HRQL impairments in PF, RP, BP, GH, VT and MH scales of SF-36 (Delta scale score: 6.6-10.7, P < 0.05). Furthermore, cirrhotic patients had more impairment of HRQL in every scale of SF-36 (Delta scale score: 6.6-43.0, P < 0.05). In terms of diagnostic groups, non-alcoholic fatty liver disease patients showed more impairment of HRQL. Analysis of this large CLD cohort suggests that a number of important clinicodemographic factors are associated with HRQL impairment. These findings contribute to the full understanding of the total impact of CLD on patients' health.
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present.more » Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.« less
Lagrangian space consistency relation for large scale structure
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-09-29
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present.more » Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.« less
Norris, Scott A; Brenner, Michael P; Aziz, Michael J
2009-06-03
We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.
Relative importance of local- and large-scale drivers of alpine soil microarthropod communities.
Mitchell, Ruth J; Urpeth, Hannah M; Britton, Andrea J; Black, Helaina; Taylor, Astrid R
2016-11-01
Nitrogen (N) deposition and climate are acknowledged drivers of change in biodiversity and ecosystem function at large scales. However, at a local scale, their impact on functions and community structure of organisms is filtered by drivers like habitat quality and food quality/availability. This study assesses the relative impact of large-scale factors, N deposition and climate (rainfall and temperature), versus local-scale factors of habitat quality and food quality/availability on soil fauna communities at 15 alpine moss-sedge heaths along an N deposition gradient in the UK. Habitat quality and food quality/availability were the primary drivers of microarthropod communities. No direct impacts of N deposition on the microarthropod community were observed, but induced changes in habitat quality (decline in moss cover and depth) and food quality (decreased vegetation C:N) associated with increased N deposition strongly suggest an indirect impact of N. Habitat quality and climate explained variation in the composition of the Oribatida, Mesostigmata, and Collembola communities, while only habitat quality significantly impacted the Prostigmata. Food quality and prey availability were important in explaining the composition of the oribatid and mesostigmatid mite communities, respectively. This study shows that, in alpine habitats, soil microarthropod community structure responds most strongly to local-scale variation in habitat quality and food availability rather than large-scale variation in climate and pollution. However, given the strong links between N deposition and the key habitat quality parameters, we conclude that N deposition indirectly drives changes in the soil microarthropod community, suggesting a mechanism by which large-scale drivers indirectly impacts these functionally important groups.
Parks, T. P.; Quist, Michael C.; Pierce, C.L.
2016-01-01
Nonwadeable rivers are unique ecosystems that support high levels of aquatic biodiversity, yet they have been greatly altered by human activities. Although riverine fish assemblages have been studied in the past, we still have an incomplete understanding of how fish assemblages respond to both natural and anthropogenic influences in large rivers. The purpose of this study was to evaluate associations between fish assemblage structure and reach-scale habitat, dam, and watershed land use characteristics. In the summers of 2011 and 2012, comprehensive fish and environmental data were collected from 33 reaches in the Iowa and Cedar rivers of eastern-central Iowa. Canonical correspondence analysis (CCA) was used to evaluate environmental relationships with species relative abundance, functional trait abundance (e.g. catch rate of tolerant species), and functional trait composition (e.g. percentage of tolerant species). On the basis of partial CCAs, reach-scale habitat, dam characteristics, and watershed land use features explained 25.0–81.1%, 6.2–25.1%, and 5.8–47.2% of fish assemblage variation, respectively. Although reach-scale, dam, and land use factors contributed to overall assemblage structure, the majority of fish assemblage variation was constrained by reach-scale habitat factors. Specifically, mean annual discharge was consistently selected in nine of the 11 CCA models and accounted for the majority of explained fish assemblage variance by reach-scale habitat. This study provides important insight on the influence of anthropogenic disturbances across multiple spatial scales on fish assemblages in large river systems.
Individual differences and time-varying features of modular brain architecture.
Liao, Xuhong; Cao, Miao; Xia, Mingrui; He, Yong
2017-05-15
Recent studies have suggested that human brain functional networks are topologically organized into functionally specialized but inter-connected modules to facilitate efficient information processing and highly flexible cognitive function. However, these studies have mainly focused on group-level network modularity analyses using "static" functional connectivity approaches. How these extraordinary modular brain structures vary across individuals and spontaneously reconfigure over time remain largely unknown. Here, we employed multiband resting-state functional MRI data (N=105) from the Human Connectome Project and a graph-based modularity analysis to systematically investigate individual variability and dynamic properties in modular brain networks. We showed that the modular structures of brain networks dramatically vary across individuals, with higher modular variability primarily in the association cortex (e.g., fronto-parietal and attention systems) and lower variability in the primary systems. Moreover, brain regions spontaneously changed their module affiliations on a temporal scale of seconds, which cannot be simply attributable to head motion and sampling error. Interestingly, the spatial pattern of intra-subject dynamic modular variability largely overlapped with that of inter-subject modular variability, both of which were highly reproducible across repeated scanning sessions. Finally, the regions with remarkable individual/temporal modular variability were closely associated with network connectors and the number of cognitive components, suggesting a potential contribution to information integration and flexible cognitive function. Collectively, our findings highlight individual modular variability and the notable dynamic characteristics in large-scale brain networks, which enhance our understanding of the neural substrates underlying individual differences in a variety of cognition and behaviors. Copyright © 2017 Elsevier Inc. All rights reserved.
Vaccarino, Anthony L; Anonymous; Anderson, Karen E.; Borowsky, Beth; Coccaro, Emil; Craufurd, David; Endicott, Jean; Giuliano, Joseph; Groves, Mark; Guttman, Mark; Ho, Aileen K; Kupchak, Peter; Paulsen, Jane S.; Stanford, Matthew S.; van Kammen, Daniel P; Watson, David; Wu, Kevin D; Evans, Ken
2011-01-01
The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound, rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to make decisions regarding which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed to assess "Anger and Irritability" and "Obsessions and Compulsions" in prHD individuals. PMID:21826116
NASA Astrophysics Data System (ADS)
Sun, P.; Jokipii, J. R.; Giacalone, J.
2016-12-01
Anisotropies in astrophysical turbulence has been proposed and observed for a long time. And recent observations adopting the multi-scale analysis techniques provided a detailed description of the scale-dependent power spectrum of the magnetic field parallel and perpendicular to the scale-dependent magnetic field line at different scales in the solar wind. In the previous work, we proposed a multi-scale method to synthesize non-isotropic turbulent magnetic field with pre-determined power spectra of the fluctuating magnetic field as a function of scales. We present the effect of test particle transport in the resulting field with a two-scale algorithm. We find that the scale-dependent turbulence anisotropy has a significant difference in the effect on charged par- ticle transport from what the isotropy or the global anisotropy has. It is important to apply this field synthesis method to the solar wind magnetic field based on spacecraft data. However, this relies on how we extract the power spectra of the turbulent magnetic field across different scales. In this study, we propose here a power spectrum synthesis method based on Fourier analysis to extract the large and small scale power spectrum from a single spacecraft observation with a long enough period and a high sampling frequency. We apply the method to the solar wind measurement by the magnetometer onboard the ACE spacecraft and regenerate the large scale isotropic 2D spectrum and the small scale anisotropic 2D spectrum. We run test particle simulations in the magnetid field generated in this way to estimate the transport coefficients and to compare with the isotropic turbulence model.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Functional wiring of the yeast kinome revealed by global analysis of genetic network motifs
Sharifpoor, Sara; van Dyk, Dewald; Costanzo, Michael; Baryshnikova, Anastasia; Friesen, Helena; Douglas, Alison C.; Youn, Ji-Young; VanderSluis, Benjamin; Myers, Chad L.; Papp, Balázs; Boone, Charles; Andrews, Brenda J.
2012-01-01
A combinatorial genetic perturbation strategy was applied to interrogate the yeast kinome on a genome-wide scale. We assessed the global effects of gene overexpression or gene deletion to map an integrated genetic interaction network of synthetic dosage lethal (SDL) and loss-of-function genetic interactions (GIs) for 92 kinases, producing a meta-network of 8700 GIs enriched for pathways known to be regulated by cognate kinases. Kinases most sensitive to dosage perturbations had constitutive cell cycle or cell polarity functions under standard growth conditions. Condition-specific screens confirmed that the spectrum of kinase dosage interactions can be expanded substantially in activating conditions. An integrated network composed of systematic SDL, negative and positive loss-of-function GIs, and literature-curated kinase–substrate interactions revealed kinase-dependent regulatory motifs predictive of novel gene-specific phenotypes. Our study provides a valuable resource to unravel novel functional relationships and pathways regulated by kinases and outlines a general strategy for deciphering mutant phenotypes from large-scale GI networks. PMID:22282571
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo
2016-01-01
Large-scale forcing and land-atmosphere interactions on precipitation are investigated with NASA-Unified WRF (NU-WRF) simulations during fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model is found to capture major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with the observations in dry and wet regions in the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as total precipitation, is presented under the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on the dry/wet regions and are a function of temporal and spatial scales. While the ratio of local and large-scale moisture contributions vary with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture in the dry region and during light rain events. The feedback of land surface processes to large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than wet conditions. Drier soil moisture tends to suppress further existing below-normal precipitation conditions via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.
Agrobacterium-mediated virus-induced gene silencing assay in cotton.
Gao, Xiquan; Britt, Robert C; Shan, Libo; He, Ping
2011-08-20
Cotton (Gossypium hirsutum) is one of the most important crops worldwide. Considerable efforts have been made on molecular breeding of new varieties. The large-scale gene functional analysis in cotton has been lagged behind most of the modern plant species, likely due to its large size of genome, gene duplication and polyploidy, long growth cycle and recalcitrance to genetic transformation(1). To facilitate high throughput functional genetic/genomic study in cotton, we attempt to develop rapid and efficient transient assays to assess cotton gene functions. Virus-Induced Gene Silencing (VIGS) is a powerful technique that was developed based on the host Post-Transcriptional Gene Silencing (PTGS) to repress viral proliferation(2,3). Agrobacterium-mediated VIGS has been successfully applied in a wide range of dicots species such as Solanaceae, Arabidopsis and legume species, and monocots species including barley, wheat and maize, for various functional genomic studies(3,4). As this rapid and efficient approach avoids plant transformation and overcomes functional redundancy, it is particularly attractive and suitable for functional genomic study in crop species like cotton not amenable for transformation. In this study, we report the detailed protocol of Agrobacterium-mediated VIGS system in cotton. Among the several viral VIGS vectors, the tobacco rattle virus (TRV) invades a wide range of hosts and is able to spread vigorously throughout the entire plant yet produce mild symptoms on the hosts5. To monitor the silencing efficiency, GrCLA1, a homolog gene of Arabidopsis Cloroplastos alterados 1 gene (AtCLA1) in cotton, has been cloned and inserted into the VIGS binary vector pYL156. CLA1 gene is involved in chloroplast development(6), and previous studies have shown that loss-of-function of AtCLA1 resulted in an albino phenotype on true leaves(7), providing an excellent visual marker for silencing efficiency. At approximately two weeks post Agrobacterium infiltration, the albino phenotype started to appear on the true leaves, with 100% silencing efficiency in all replicated experiments. The silencing of endogenous gene expression was also confirmed by RT-PCR analysis. Significantly, silencing could potently occur in all the cultivars we tested, including various commercially grown varieties in Texas. This rapid and efficient Agrobacterium-mediated VIGS assay provides a very powerful tool for rapid large-scale analysis of gene functions at genome-wide level in cotton.
Agrobacterium-Mediated Virus-Induced Gene Silencing Assay In Cotton
Gao, Xiquan; Britt Jr., Robert C.; Shan, Libo; He, Ping
2011-01-01
Cotton (Gossypium hirsutum) is one of the most important crops worldwide. Considerable efforts have been made on molecular breeding of new varieties. The large-scale gene functional analysis in cotton has been lagged behind most of the modern plant species, likely due to its large size of genome, gene duplication and polyploidy, long growth cycle and recalcitrance to genetic transformation1. To facilitate high throughput functional genetic/genomic study in cotton, we attempt to develop rapid and efficient transient assays to assess cotton gene functions. Virus-Induced Gene Silencing (VIGS) is a powerful technique that was developed based on the host Post-Transcriptional Gene Silencing (PTGS) to repress viral proliferation2,3. Agrobacterium-mediated VIGS has been successfully applied in a wide range of dicots species such as Solanaceae, Arabidopsis and legume species, and monocots species including barley, wheat and maize, for various functional genomic studies3,4. As this rapid and efficient approach avoids plant transformation and overcomes functional redundancy, it is particularly attractive and suitable for functional genomic study in crop species like cotton not amenable for transformation. In this study, we report the detailed protocol of Agrobacterium-mediated VIGS system in cotton. Among the several viral VIGS vectors, the tobacco rattle virus (TRV) invades a wide range of hosts and is able to spread vigorously throughout the entire plant yet produce mild symptoms on the hosts5. To monitor the silencing efficiency, GrCLA1, a homolog gene of Arabidopsis Cloroplastos alterados 1 gene (AtCLA1) in cotton, has been cloned and inserted into the VIGS binary vector pYL156. CLA1 gene is involved in chloroplast development6, and previous studies have shown that loss-of-function of AtCLA1 resulted in an albino phenotype on true leaves7, providing an excellent visual marker for silencing efficiency. At approximately two weeks post Agrobacterium infiltration, the albino phenotype started to appear on the true leaves, with 100% silencing efficiency in all replicated experiments. The silencing of endogenous gene expression was also confirmed by RT-PCR analysis. Significantly, silencing could potently occur in all the cultivars we tested, including various commercially grown varieties in Texas. This rapid and efficient Agrobacterium-mediated VIGS assay provides a very powerful tool for rapid large-scale analysis of gene functions at genome-wide level in cotton. PMID:21876527
Feng, Jun-Tao; Liu, Han-Qiu; Hua, Xu-Yun; Gu, Yu-Dong; Xu, Jian-Guang; Xu, Wen-Dong
2016-12-01
Brachial plexus injury (BPI) is a type of severe peripheral nerve trauma that leads to central remodeling in the brain, as revealed by functional MRI analysis. However, previously reported remodeling is mostly restricted to sensorimotor areas of the brain. Whether this disturbance in the sensorimotor network leads to larger-scale functional remodeling remains unknown. We sought to explore the higher-level brain functional abnormality pattern of BPI patients from a large-scale network function connectivity dimension in 15 right-handed BPI patients. Resting-state functional MRI data were collected and analyzed using independent component analysis methods. Five components of interest were recognized and compared between patients and healthy subjects. Patients showed significantly altered brain local functional activities in the bilateral fronto-parietal network (FPN), sensorimotor network (SMN), and executive-control network (ECN) compared with healthy subjects. Moreover, functional connectivity between SMN and ECN were significantly less in patients compared with healthy subjects, and connectivity strength between ECN and SMN was negatively correlated with patients' residual function of the affected limb. Functional connectivity between SMN and right FPN were also significantly less than in controls, although connectivity between ECN and default mode network (DMN) was greater than in controls. These data suggested that brain functional disturbance in BPI patients extends beyond the sensorimotor network and cascades serial remodeling in the brain, which significantly correlates with residual hand function of the paralyzed limb. Furthermore, functional remodeling in these higher-level functional networks may lead to cognitive alterations in complex tasks.
Mitchell, Joshua M.; Fan, Teresa W.-M.; Lane, Andrew N.; Moseley, Hunter N. B.
2014-01-01
Large-scale identification of metabolites is key to elucidating and modeling metabolism at the systems level. Advances in metabolomics technologies, particularly ultra-high resolution mass spectrometry (MS) enable comprehensive and rapid analysis of metabolites. However, a significant barrier to meaningful data interpretation is the identification of a wide range of metabolites including unknowns and the determination of their role(s) in various metabolic networks. Chemoselective (CS) probes to tag metabolite functional groups combined with high mass accuracy provide additional structural constraints for metabolite identification and quantification. We have developed a novel algorithm, Chemically Aware Substructure Search (CASS) that efficiently detects functional groups within existing metabolite databases, allowing for combined molecular formula and functional group (from CS tagging) queries to aid in metabolite identification without a priori knowledge. Analysis of the isomeric compounds in both Human Metabolome Database (HMDB) and KEGG Ligand demonstrated a high percentage of isomeric molecular formulae (43 and 28%, respectively), indicating the necessity for techniques such as CS-tagging. Furthermore, these two databases have only moderate overlap in molecular formulae. Thus, it is prudent to use multiple databases in metabolite assignment, since each major metabolite database represents different portions of metabolism within the biosphere. In silico analysis of various CS-tagging strategies under different conditions for adduct formation demonstrate that combined FT-MS derived molecular formulae and CS-tagging can uniquely identify up to 71% of KEGG and 37% of the combined KEGG/HMDB database vs. 41 and 17%, respectively without adduct formation. This difference between database isomer disambiguation highlights the strength of CS-tagging for non-lipid metabolite identification. However, unique identification of complex lipids still needs additional information. PMID:25120557
Large-scale brain networks are distinctly affected in right and left mesial temporal lobe epilepsy.
de Campos, Brunno Machado; Coan, Ana Carolina; Lin Yasuda, Clarissa; Casseb, Raphael Fernandes; Cendes, Fernando
2016-09-01
Mesial temporal lobe epilepsy (MTLE) with hippocampus sclerosis (HS) is associated with functional and structural alterations extending beyond the temporal regions and abnormal pattern of brain resting state networks (RSNs) connectivity. We hypothesized that the interaction of large-scale RSNs is differently affected in patients with right- and left-MTLE with HS compared to controls. We aimed to determine and characterize these alterations through the analysis of 12 RSNs, functionally parceled in 70 regions of interest (ROIs), from resting-state functional-MRIs of 99 subjects (52 controls, 26 right- and 21 left-MTLE patients with HS). Image preprocessing and statistical analysis were performed using UF(2) C-toolbox, which provided ROI-wise results for intranetwork and internetwork connectivity. Intranetwork abnormalities were observed in the dorsal default mode network (DMN) in both groups of patients and in the posterior salience network in right-MTLE. Both groups showed abnormal correlation between the dorsal-DMN and the posterior salience, as well as between the dorsal-DMN and the executive-control network. Patients with left-MTLE also showed reduced correlation between the dorsal-DMN and visuospatial network and increased correlation between bilateral thalamus and the posterior salience network. The ipsilateral hippocampus stood out as a central area of abnormalities. Alterations on left-MTLE expressed a low cluster coefficient, whereas the altered connections on right-MTLE showed low cluster coefficient in the DMN but high in the posterior salience regions. Both right- and left-MTLE patients with HS have widespread abnormal interactions of large-scale brain networks; however, all parameters evaluated indicate that left-MTLE has a more intricate bihemispheric dysfunction compared to right-MTLE. Hum Brain Mapp 37:3137-3152, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
DEMNUni: massive neutrinos and the bispectrum of large scale structures
NASA Astrophysics Data System (ADS)
Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano
2018-03-01
The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such effect is of particular importance as it allows to constraint the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistics characterising the non-Gaussian properties of the matter and dark matter halos distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, finding the approximation of neutrinos contributing at quadratic order in perturbation theory to provide a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to Script O(fν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies, to non-linear and non-local corrections finding consistent results with the power spectrum analysis.
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form and with unknown nonlinear functions. The fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is changed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The whole optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller can ensure that all the variables of the control system are uniformly ultimately bounded, and the cost functions are the smallest. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit
Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction,more » respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a very proper basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.« less
A global trait-based approach to estimate leaf nitrogen functional allocation from observations
Ghimire, Bardan; Riley, William J.; Koven, Charles D.; ...
2017-03-28
Nitrogen is one of the most important nutrients for plant growth and a major constituent of proteins that regulate photosynthetic and respiratory processes. However, a comprehensive global analysis of nitrogen allocation in leaves for major processes with respect to different plant functional types is currently lacking. This study integrated observations from global databases with photosynthesis and respiration models to determine plant-functional-type-specific allocation patterns of leaf nitrogen for photosynthesis (Rubisco, electron transport, light absorption) and respiration (growth and maintenance), and by difference from observed total leaf nitrogen, an unexplained “residual” nitrogen pool. Based on our analysis, crops partition the largest fractionmore » of nitrogen to photosynthesis (57%) and respiration (5%) followed by herbaceous plants (44% and 4%). Tropical broadleaf evergreen trees partition the least to photosynthesis (25%) and respiration (2%) followed by needle-leaved evergreen trees (28% and 3%). In trees (especially needle-leaved evergreen and tropical broadleaf evergreen trees) a large fraction (70% and 73% respectively) of nitrogen was not explained by photosynthetic or respiratory functions. Compared to crops and herbaceous plants, this large residual pool is hypothesized to emerge from larger investments in cell wall proteins, lipids, amino acids, nucleic acid, CO2 fixation proteins (other than Rubisco), secondary compounds, and other proteins. Our estimates are different from previous studies due to differences in methodology and assumptions used in deriving nitrogen allocation estimates. Unlike previous studies, we integrate and infer nitrogen allocation estimates across multiple plant functional types, and report substantial differences in nitrogen allocation across different plant functional types. Furthermore, the resulting pattern of nitrogen allocation provides insights on mechanisms that operate at a cellular scale within leaves, and can be integrated with ecosystem models to derive emergent properties of ecosystem productivity at local, regional, and global scales.« less
A global trait-based approach to estimate leaf nitrogen functional allocation from observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghimire, Bardan; Riley, William J.; Koven, Charles D.
Nitrogen is one of the most important nutrients for plant growth and a major constituent of proteins that regulate photosynthetic and respiratory processes. However, a comprehensive global analysis of nitrogen allocation in leaves for major processes with respect to different plant functional types is currently lacking. This study integrated observations from global databases with photosynthesis and respiration models to determine plant-functional-type-specific allocation patterns of leaf nitrogen for photosynthesis (Rubisco, electron transport, light absorption) and respiration (growth and maintenance), and by difference from observed total leaf nitrogen, an unexplained “residual” nitrogen pool. Based on our analysis, crops partition the largest fractionmore » of nitrogen to photosynthesis (57%) and respiration (5%) followed by herbaceous plants (44% and 4%). Tropical broadleaf evergreen trees partition the least to photosynthesis (25%) and respiration (2%) followed by needle-leaved evergreen trees (28% and 3%). In trees (especially needle-leaved evergreen and tropical broadleaf evergreen trees) a large fraction (70% and 73% respectively) of nitrogen was not explained by photosynthetic or respiratory functions. Compared to crops and herbaceous plants, this large residual pool is hypothesized to emerge from larger investments in cell wall proteins, lipids, amino acids, nucleic acid, CO2 fixation proteins (other than Rubisco), secondary compounds, and other proteins. Our estimates are different from previous studies due to differences in methodology and assumptions used in deriving nitrogen allocation estimates. Unlike previous studies, we integrate and infer nitrogen allocation estimates across multiple plant functional types, and report substantial differences in nitrogen allocation across different plant functional types. Furthermore, the resulting pattern of nitrogen allocation provides insights on mechanisms that operate at a cellular scale within leaves, and can be integrated with ecosystem models to derive emergent properties of ecosystem productivity at local, regional, and global scales.« less
Exploring connectivity with large-scale Granger causality on resting-state functional MRI.
DSouza, Adora M; Abidin, Anas Z; Leistritz, Lutz; Wismüller, Axel
2017-08-01
Large-scale Granger causality (lsGC) is a recently developed, resting-state functional MRI (fMRI) connectivity analysis approach that estimates multivariate voxel-resolution connectivity. Unlike most commonly used multivariate approaches, which establish coarse-resolution connectivity by aggregating voxel time-series avoiding an underdetermined problem, lsGC estimates voxel-resolution, fine-grained connectivity by incorporating an embedded dimension reduction. We investigate application of lsGC on realistic fMRI simulations, modeling smoothing of neuronal activity by the hemodynamic response function and repetition time (TR), and empirical resting-state fMRI data. Subsequently, functional subnetworks are extracted from lsGC connectivity measures for both datasets and validated quantitatively. We also provide guidelines to select lsGC free parameters. Results indicate that lsGC reliably recovers underlying network structure with area under receiver operator characteristic curve (AUC) of 0.93 at TR=1.5s for a 10-min session of fMRI simulations. Furthermore, subnetworks of closely interacting modules are recovered from the aforementioned lsGC networks. Results on empirical resting-state fMRI data demonstrate recovery of visual and motor cortex in close agreement with spatial maps obtained from (i) visuo-motor fMRI stimulation task-sequence (Accuracy=0.76) and (ii) independent component analysis (ICA) of resting-state fMRI (Accuracy=0.86). Compared with conventional Granger causality approach (AUC=0.75), lsGC produces better network recovery on fMRI simulations. Furthermore, it cannot recover functional subnetworks from empirical fMRI data, since quantifying voxel-resolution connectivity is not possible as consequence of encountering an underdetermined problem. Functional network recovery from fMRI data suggests that lsGC gives useful insight into connectivity patterns from resting-state fMRI at a multivariate voxel-resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chockanathan, Udaysankar; DSouza, Adora M.; Abidin, Anas Z.; Schifitto, Giovanni; Wismüller, Axel
2018-02-01
Resting-state functional MRI (rs-fMRI), coupled with advanced multivariate time-series analysis methods such as Granger causality, is a promising tool for the development of novel functional connectivity biomarkers of neurologic and psychiatric disease. Recently large-scale Granger causality (lsGC) has been proposed as an alternative to conventional Granger causality (cGC) that extends the scope of robust Granger causal analyses to high-dimensional systems such as the human brain. In this study, lsGC and cGC were comparatively evaluated on their ability to capture neurologic damage associated with HIV-associated neurocognitive disorders (HAND). Functional brain network models were constructed from rs-fMRI data collected from a cohort of HIV+ and HIV- subjects. Graph theoretic properties of the resulting networks were then used to train a support vector machine (SVM) model to predict clinically relevant parameters, such as HIV status and neuropsychometric (NP) scores. For the HIV+/- classification task, lsGC, which yielded a peak area under the receiver operating characteristic curve (AUC) of 0.83, significantly outperformed cGC, which yielded a peak AUC of 0.61, at all parameter settings tested. For the NP score regression task, lsGC, with a minimum mean squared error (MSE) of 0.75, significantly outperformed cGC, with a minimum MSE of 0.84 (p < 0.001, one-tailed paired t-test). These results show that, at optimal parameter settings, lsGC is better able to capture functional brain connectivity correlates of HAND than cGC. However, given the substantial variation in the performance of the two methods at different parameter settings, particularly for the regression task, improved parameter selection criteria are necessary and constitute an area for future research.
Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysismore » and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contexualized representation is a valid approach and encourages contextual data management.« less
Contextual Compression of Large-Scale Wind Turbine Array Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysismore » and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interative visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.« less
Molecular inversion probe assay.
Absalan, Farnaz; Ronaghi, Mostafa
2007-01-01
We have described molecular inversion probe technologies for large-scale genetic analyses. This technique provides a comprehensive and powerful tool for the analysis of genetic variation and enables affordable, large-scale studies that will help uncover the genetic basis of complex disease and explain the individual variation in response to therapeutics. Major applications of the molecular inversion probes (MIP) technologies include targeted genotyping from focused regions to whole-genome studies, and allele quantification of genomic rearrangements. The MIP technology (used in the HapMap project) provides an efficient, scalable, and affordable way to score polymorphisms in case/control populations for genetic studies. The MIP technology provides the highest commercially available multiplexing levels and assay conversion rates for targeted genotyping. This enables more informative, genome-wide studies with either the functional (direct detection) approach or the indirect detection approach.
Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag
2015-01-01
Microbial enzyme diversity is a key to understand many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity and their taxonomic distribution correlated well with the traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach in large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729
TUBEs-Mass Spectrometry for Identification and Analysis of the Ubiquitin-Proteome.
Azkargorta, Mikel; Escobes, Iraide; Elortza, Felix; Matthiesen, Rune; Rodríguez, Manuel S
2016-01-01
Mass spectrometry (MS) has become the method of choice for the large-scale analysis of protein ubiquitylation. There exist a number of proposed methods for mapping ubiquitin sites, each with different pros and cons. We present here a protocol for the MS analysis of the ubiquitin-proteome captured by TUBEs and subsequent data analysis. Using dedicated software and algorithms, specific information on the presence of ubiquitylated peptides can be obtained from the MS search results. In addition, a quantitative and functional analysis of the ubiquitylated proteins and their interacting partners helps to unravel the biological and molecular processes they are involved in.
NASA Astrophysics Data System (ADS)
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regards to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries usually under terrestrial conditions. The 1 to 3m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined the proposed experiment produced measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding thermocapillary flows and thus improve technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
Phillips, J. C.
2014-01-01
Influenza virus contains two highly variable envelope glycoproteins, hemagglutinin (HA) and neuraminidase (NA). The structure and properties of HA, which is responsible for binding the virus to the cell that is being infected, change significantly when the virus is transmitted from avian or swine species to humans. Here we focus first on the simpler problem of the much smaller human individual evolutionary amino acid mutational changes in NA, which cleaves sialic acid groups and is required for influenza virus replication. Our thermodynamic panorama shows that very small amino acid changes can be monitored very accurately across many historic (1945–2011) Uniprot and NCBI strains using hydropathicity scales to quantify the roughness of water film packages. Quantitative sequential analysis is most effective with the fractal differential hydropathicity scale based on protein self-organized criticality (SOC). Our analysis shows that large-scale vaccination programs have been responsible for a very large convergent reduction in common influenza severity in the last century. Hydropathic analysis is capable of interpreting and even predicting trends of functional changes in mutation prolific viruses directly from amino acid sequences alone. An engineered strain of NA1 is described which could well be significantly less virulent than current circulating strains. PMID:25143953
Pao, Sheng-Ying; Lin, Win-Li; Hwang, Ming-Jing
2006-01-01
Background Screening for differentially expressed genes on the genomic scale and comparative analysis of the expression profiles of orthologous genes between species to study gene function and regulation are becoming increasingly feasible. Expressed sequence tags (ESTs) are an excellent source of data for such studies using bioinformatic approaches because of the rich libraries and tremendous amount of data now available in the public domain. However, any large-scale EST-based bioinformatics analysis must deal with the heterogeneous, and often ambiguous, tissue and organ terms used to describe EST libraries. Results To deal with the issue of tissue source, in this work, we carefully screened and organized more than 8 million human and mouse ESTs into 157 human and 108 mouse tissue/organ categories, to which we applied an established statistic test using different thresholds of the p value to identify genes differentially expressed in different tissues. Further analysis of the tissue distribution and level of expression of human and mouse orthologous genes showed that tissue-specific orthologs tended to have more similar expression patterns than those lacking significant tissue specificity. On the other hand, a number of orthologs were found to have significant disparity in their expression profiles, hinting at novel functions, divergent regulation, or new ortholog relationships. Conclusion Comprehensive statistics on the tissue-specific expression of human and mouse genes were obtained in this very large-scale, EST-based analysis. These statistical results have been organized into a database, freely accessible at our website , for easy searching of human and mouse tissue-specific genes and for investigating gene expression profiles in the context of comparative genomics. Comparative analysis showed that, although highly tissue-specific genes tend to exhibit similar expression profiles in human and mouse, there are significant exceptions, indicating that orthologous genes, while sharing basic genomic properties, could result in distinct phenotypes. PMID:16626500
Dhanyalakshmi, K H; Naika, Mahantesha B N; Sajeevan, R S; Mathew, Oommen K; Shafi, K Mohamed; Sowdhamini, Ramanathan; N Nataraja, Karaba
2016-01-01
The modern sequencing technologies are generating large volumes of information at the transcriptome and genome level. Translation of this information into a biological meaning is far behind the race due to which a significant portion of proteins discovered remain as proteins of unknown function (PUFs). Attempts to uncover the functional significance of PUFs are limited due to lack of easy and high throughput functional annotation tools. Here, we report an approach to assign putative functions to PUFs, identified in the transcriptome of mulberry, a perennial tree commonly cultivated as host of silkworm. We utilized the mulberry PUFs generated from leaf tissues exposed to drought stress at whole plant level. A sequence and structure based computational analysis predicted the probable function of the PUFs. For rapid and easy annotation of PUFs, we developed an automated pipeline by integrating diverse bioinformatics tools, designated as PUFs Annotation Server (PUFAS), which also provides a web service API (Application Programming Interface) for a large-scale analysis up to a genome. The expression analysis of three selected PUFs annotated by the pipeline revealed abiotic stress responsiveness of the genes, and hence their potential role in stress acclimation pathways. The automated pipeline developed here could be extended to assign functions to PUFs from any organism in general. PUFAS web server is available at http://caps.ncbs.res.in/pufas/ and the web service is accessible at http://capservices.ncbs.res.in/help/pufas.
Rational functional representation of flap noise spectra including correction for reflection effects
NASA Technical Reports Server (NTRS)
Miles, J. H.
1974-01-01
A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on Thomas' (1969) N-independent-source model extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown-flap data taken from turbofan engine tests and from large-scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.
Adjacent-Categories Mokken Models for Rater-Mediated Assessments
Wind, Stefanie A.
2016-01-01
Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP; Soares, Thereza A.
2007-12-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources is extending molecular simulations to biological more relevant system size and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just asmore » in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines. We are currently using an approach of developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to the entry within the trajectory that typically will be available in multiple files and independently from all other processors read the appropriate frames.« less
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP
2008-03-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources is extending molecular simulations to biological more relevant system size and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just asmore » in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines. We are currently using an approach of developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to the entry within the trajectory that typically will be available in multiple files and independently from all other processors read the appropriate frames.« less
USDA-ARS?s Scientific Manuscript database
Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-09-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies.
Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding
Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro
2015-01-01
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
News from the proton - recent DIS results from HERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, K.
1997-01-01
Recent results from the two large general-purpose detectors H1 and ZEUS at HERA (DESY, Hamburg, Germany) are presented. Emphasis is given to the analysis of deep inelastic scattering defined by the observation of the scattered electron or positron in the main calorimeters. Results on purely inclusive cross sections lead to a determination of the charged (quarks) parton distribution F{sub 2}(x, Q{sup 2}). Access to the electrically neutral parton content (gluons) is obtained indirectly by an analysis of the expected scaling violation behavior of F{sub 2} or directly from multijet rates originating from well-defined initial parton configurations. Finally, the recently uncoveredmore » subclass of large rapidity gap (LRG) events has been analyzed in terms of F{sub 2}. The result supports the concept of a color neutral object (Pomeron IP) being probed by a hard scattering electron. Evidence for factorization of the Pomeron radiation process as well as for scaling in the inclusive IP structure functions has been found.« less
Functional regression method for whole genome eQTL epistasis analysis with sequencing data.
Xu, Kelin; Jin, Li; Xiong, Momiao
2017-05-18
Epistasis plays an essential rule in understanding the regulation mechanisms and is an essential component of the genetic architecture of the gene expressions. However, interaction analysis of gene expressions remains fundamentally unexplored due to great computational challenges and data availability. Due to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position level read count curves. A single number for measuring gene expression which is widely used for microarray measured gene expression analysis is highly unlikely to sufficiently account for large expression variation across the gene. Simultaneously analyzing epistatic architecture using the RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pair-wises SNPs, the FRGM takes a gene as a basic unit for epistasis analysis, which tests for the interaction of all possible pairs of genes and use all the information that can be accessed to collectively test interaction between all possible pairs of SNPs within two genome regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect the interactions between genes than the existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genome Project. The numbers of pairs of significantly interacting genes after Bonferroni correction identified using FRGM, RPKM and DESeq were 16,2361, 260 and 51, respectively, from the 350 European samples. The proposed FRGM for epistasis analysis of RNA-seq can capture isoform and position-level information and will have a broad application. Both simulations and real data analysis highlight the potential for the FRGM to be a good choice of the epistatic analysis with sequencing data.
Damaraju, E; Allen, E A; Belger, A; Ford, J M; McEwen, S; Mathalon, D H; Mueller, B A; Pearlson, G D; Potkin, S G; Preda, A; Turner, J A; Vaidya, J G; van Erp, T G; Calhoun, V D
2014-01-01
Schizophrenia is a psychotic disorder characterized by functional dysconnectivity or abnormal integration between distant brain regions. Recent functional imaging studies have implicated large-scale thalamo-cortical connectivity as being disrupted in patients. However, observed connectivity differences in schizophrenia have been inconsistent between studies, with reports of hyperconnectivity and hypoconnectivity between the same brain regions. Using resting state eyes-closed functional imaging and independent component analysis on a multi-site data that included 151 schizophrenia patients and 163 age- and gender matched healthy controls, we decomposed the functional brain data into 100 components and identified 47 as functionally relevant intrinsic connectivity networks. We subsequently evaluated group differences in functional network connectivity, both in a static sense, computed as the pairwise Pearson correlations between the full network time courses (5.4 minutes in length), and a dynamic sense, computed using sliding windows (44 s in length) and k-means clustering to characterize five discrete functional connectivity states. Static connectivity analysis revealed that compared to healthy controls, patients show significantly stronger connectivity, i.e., hyperconnectivity, between the thalamus and sensory networks (auditory, motor and visual), as well as reduced connectivity (hypoconnectivity) between sensory networks from all modalities. Dynamic analysis suggests that (1), on average, schizophrenia patients spend much less time than healthy controls in states typified by strong, large-scale connectivity, and (2), that abnormal connectivity patterns are more pronounced during these connectivity states. In particular, states exhibiting cortical-subcortical antagonism (anti-correlations) and strong positive connectivity between sensory networks are those that show the group differences of thalamic hyperconnectivity and sensory hypoconnectivity. Group differences are weak or absent during other connectivity states. Dynamic analysis also revealed hypoconnectivity between the putamen and sensory networks during the same states of thalamic hyperconnectivity; notably, this finding cannot be observed in the static connectivity analysis. Finally, in post-hoc analyses we observed that the relationships between sub-cortical low frequency power and connectivity with sensory networks is altered in patients, suggesting different functional interactions between sub-cortical nuclei and sensorimotor cortex during specific connectivity states. While important differences between patients with schizophrenia and healthy controls have been identified, one should interpret the results with caution given the history of medication in patients. Taken together, our results support and expand current knowledge regarding dysconnectivity in schizophrenia, and strongly advocate the use of dynamic analyses to better account for and understand functional connectivity differences.
Damaraju, E.; Allen, E.A.; Belger, A.; Ford, J.M.; McEwen, S.; Mathalon, D.H.; Mueller, B.A.; Pearlson, G.D.; Potkin, S.G.; Preda, A.; Turner, J.A.; Vaidya, J.G.; van Erp, T.G.; Calhoun, V.D.
2014-01-01
Schizophrenia is a psychotic disorder characterized by functional dysconnectivity or abnormal integration between distant brain regions. Recent functional imaging studies have implicated large-scale thalamo-cortical connectivity as being disrupted in patients. However, observed connectivity differences in schizophrenia have been inconsistent between studies, with reports of hyperconnectivity and hypoconnectivity between the same brain regions. Using resting state eyes-closed functional imaging and independent component analysis on a multi-site data that included 151 schizophrenia patients and 163 age- and gender matched healthy controls, we decomposed the functional brain data into 100 components and identified 47 as functionally relevant intrinsic connectivity networks. We subsequently evaluated group differences in functional network connectivity, both in a static sense, computed as the pairwise Pearson correlations between the full network time courses (5.4 minutes in length), and a dynamic sense, computed using sliding windows (44 s in length) and k-means clustering to characterize five discrete functional connectivity states. Static connectivity analysis revealed that compared to healthy controls, patients show significantly stronger connectivity, i.e., hyperconnectivity, between the thalamus and sensory networks (auditory, motor and visual), as well as reduced connectivity (hypoconnectivity) between sensory networks from all modalities. Dynamic analysis suggests that (1), on average, schizophrenia patients spend much less time than healthy controls in states typified by strong, large-scale connectivity, and (2), that abnormal connectivity patterns are more pronounced during these connectivity states. In particular, states exhibiting cortical–subcortical antagonism (anti-correlations) and strong positive connectivity between sensory networks are those that show the group differences of thalamic hyperconnectivity and sensory hypoconnectivity. Group differences are weak or absent during other connectivity states. Dynamic analysis also revealed hypoconnectivity between the putamen and sensory networks during the same states of thalamic hyperconnectivity; notably, this finding cannot be observed in the static connectivity analysis. Finally, in post-hoc analyses we observed that the relationships between sub-cortical low frequency power and connectivity with sensory networks is altered in patients, suggesting different functional interactions between sub-cortical nuclei and sensorimotor cortex during specific connectivity states. While important differences between patients with schizophrenia and healthy controls have been identified, one should interpret the results with caution given the history of medication in patients. Taken together, our results support and expand current knowledge regarding dysconnectivity in schizophrenia, and strongly advocate the use of dynamic analyses to better account for and understand functional connectivity differences. PMID:25161896
Using Galaxy to Perform Large-Scale Interactive Data Analyses
Hillman-Jackson, Jennifer; Clements, Dave; Blankenberg, Daniel; Taylor, James; Nekrutenko, Anton
2014-01-01
Innovations in biomedical research technologies continue to provide experimental biologists with novel and increasingly large genomic and high-throughput data resources to be analyzed. As creating and obtaining data has become easier, the key decision faced by many researchers is a practical one: where and how should an analysis be performed? Datasets are large and analysis tool set-up and use is riddled with complexities outside of the scope of core research activities. The authors believe that Galaxy provides a powerful solution that simplifies data acquisition and analysis in an intuitive Web application, granting all researchers access to key informatics tools previously only available to computational specialists working in Unix-based environments. We will demonstrate through a series of biomedically relevant protocols how Galaxy specifically brings together (1) data retrieval from public and private sources, for example, UCSC's Eukaryote and Microbial Genome Browsers, (2) custom tools (wrapped Unix functions, format standardization/conversions, interval operations), and 3rd-party analysis tools. PMID:22700312
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao, E-mail: bh2478@columbia.edu, E-mail: lh399@columbia.edu, E-mail: xx2146@columbia.edu
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present.more » The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.« less
Sharma, Parichit; Mantri, Shrikant S
2014-01-01
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis.
Sharma, Parichit; Mantri, Shrikant S.
2014-01-01
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410
Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.
Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta
2014-07-01
We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images-compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enables it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts emphasize the benefits of this visual approach to the observational astronomy field; and its potential benefits to large scale geospatial visualization in general.
Black holes from large N singlet models
NASA Astrophysics Data System (ADS)
Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico
2018-03-01
The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
Influence of topographic heterogeneity on the abandance of larch forest in eastern Siberia
NASA Astrophysics Data System (ADS)
Sato, H.; Kobayashi, H.
2016-12-01
In eastern Siberia, larches (Larix spp.) often exist in pure stands, constructing the world's largest coniferous forest, of which changes can significantly affect the earth's albedo and the global carbon balance. We have conducted simulation studies for this vegetation, aiming to forecast its structures and functions under changing climate (1, 2). In previous studies of simulating vegetation at large geographical scales, the examining area is divided into coarse grid cells such as 0.5 * 0.5 degree resolution, and topographical heterogeneities within each grid cell are just ignored. However, in Siberian larch area, which is located on the environmental edge of existence of forest ecosystem, abundance of larch trees largely depends on topographic condition at the scale of tens to hundreds meters. We, therefore, analyzed patterns of within-grid-scale heterogeneity of larch LAI as a function of topographic condition, and examined its underlying reason. For this analysis, larch LAI was estimated for each 1/112 degree from the SPOT-VEGETATION data, and topographic properties such as angularity and aspect direction were estimated form the ASTER-GDEM data. Through this analysis, we found that, for example, sign of correlation between angularity and larch LAI depends on hydrological condition on the grid cell. We then refined the hydrological sub-model of our vegetation model SEIB-DGVM, and validated whether the modified model can reconstruct these patterns, and examined its impact on the estimation of biomass and vegetation productivity of entire larch region. -- References --1. Sato, H., et al. (2010). "Simulation study of the vegetation structure and function in eastern Siberian larch forests using the individual-based vegetation model SEIB-DGVM." Forest Ecology and Management 259(3): 301-311.2. Sato, H., et al. (2016). "Endurance of larch forest ecosystems in eastern Siberia under warming trends." Ecology and Evolution
Survival analysis for a large scale forest health issue: Missouri oak decline
C.W. Woodall; P.L. Grambsch; W. Thomas; W.K. Moser
2005-01-01
Survival analysis methodologies provide novel approaches for forest mortality analysis that may aid in detecting, monitoring, and mitigating of large-scale forest health issues. This study examined survivor analysis for evaluating a regional forest health issue - Missouri oak decline. With a statewide Missouri forest inventory, log-rank tests of the effects of...
Large-scale functional models of visual cortex for remote sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E
Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex requiring {approx}1 petaflop of computation. In a year, the retina delivers {approx}1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simplemore » region VI code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.« less
Effects of biasing on the galaxy power spectrum at large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran Jimenez, Jose; Departamento de Fisica Teorica, Universidad Complutense de Madrid, 28040, Madrid; Durrer, Ruth
2011-05-15
In this paper we study the effect of biasing on the power spectrum at large scales. We show that even though nonlinear biasing does introduce a white noise contribution on large scales, the P(k){proportional_to}k{sup n} behavior of the matter power spectrum on large scales may still be visible and above the white noise for about one decade. We show, that the Kaiser biasing scheme which leads to linear bias of the correlation function on large scales, also generates a linear bias of the power spectrum on rather small scales. This is a consequence of the divergence on small scales ofmore » the pure Harrison-Zeldovich spectrum. However, biasing becomes k dependent if we damp the underlying power spectrum on small scales. We also discuss the effect of biasing on the baryon acoustic oscillations.« less
Reduced Global Functional Connectivity of the Medial Prefrontal Cortex in Major Depressive Disorder
Murrough, James W.; Abdallah, Chadi G.; Anticevic, Alan; Collins, Katherine A.; Geha, Paul; Averill, Lynnette A.; Schwartz, Jaclyn; DeWilde, Kaitlin E.; Averill, Christopher; Yang, Genevieve Jia-wei; Wong, Edmund; Tang, Cheuk Y.; Krystal, John H.; Iosifescu, Dan V.; Charney, Dennis S.
2016-01-01
Background Major depressive disorder is a disabling neuropsychiatric condition that is associated with disrupted functional connectivity across brain networks. The precise nature of altered connectivity, however, remains incompletely understood. The current study was designed to examine the coherence of large-scale connectivity in depression using a recently developed technique termed global brain connectivity. Methods A total of 82 subjects, including medication-free patients with major depression (n=57) and healthy volunteers (n=25) underwent functional magnetic resonance imaging with resting data acquisition for functional connectivity analysis. Global brain connectivity was computed as the mean of each voxel’s time series correlation with every other voxel and compared between study groups. Relationships between global connectivity and depressive symptom severity measured using the Montgomery-Åsberg Depression Rating Scale were examined by means of linear correlation. Results Relative to the healthy group, patients with depression evidenced reduced global connectivity bilaterally within multiple regions of medial and lateral prefrontal cortex. The largest between-group difference was observed within the right subgenual anterior cingulate cortex, extending into ventromedial prefrontal cortex bilaterally (Hedges’ g = −1.48, p<0.000001). Within the depressed group, patients with the lowest connectivity evidenced the highest symptom severity within ventromedial prefrontal cortex (r = −0.47, p=0.0005). Conclusions Patients with major depressive evidenced abnormal large-scale functional coherence in the brain that was centered within the subgenual cingulate cortex, and medial prefrontal cortex more broadly. These data extend prior studies of connectivity in depression and demonstrate that functional disconnection of the medial prefrontal cortex is a key pathological feature of the disorder. PMID:27144347
Sparse Zero-Sum Games as Stable Functional Feature Selection
Sokolovska, Nataliya; Teytaud, Olivier; Rizkalla, Salwa; Clément, Karine; Zucker, Jean-Daniel
2015-01-01
In large-scale systems biology applications, features are structured in hidden functional categories whose predictive power is identical. Feature selection, therefore, can lead not only to a problem with a reduced dimensionality, but also reveal some knowledge on functional classes of variables. In this contribution, we propose a framework based on a sparse zero-sum game which performs a stable functional feature selection. In particular, the approach is based on feature subsets ranking by a thresholding stochastic bandit. We provide a theoretical analysis of the introduced algorithm. We illustrate by experiments on both synthetic and real complex data that the proposed method is competitive from the predictive and stability viewpoints. PMID:26325268
The luminosity function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1989-01-01
The luminosity function for two complete slices of the extension of the CfA redshift survey is calculated. The nonparametric technique of Lynden-Bell (1971) and Turner (1979) is used to determine the shape for the luminosity function of the 12 deg slice of the redshift survey. The amplitude of the luminosity function is determined, taking large-scale inhomogeneities into account. The effects of the Malmquist bias on a magnitude-limited redshift survey are examined, showing that the random errors in the magnitudes for the 12 deg slice affect both the determination of the luminosity function and the spatial density constrast of large scale structures.
Scaling Deviations for Neutrino Reactions in Aysmptotically Free Field Theories
DOE R&D Accomplishments Database
Wilczek, F. A.; Zee, A.; Treiman, S. B.
1974-11-01
Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.
Klukas, Christian; Chen, Dijun; Pape, Jean-Michel
2014-01-01
High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of data for both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for build-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data in high accuracy up to 0.98 and 0.95, respectively. In summary, IAP provides a multiple set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1990-01-01
The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high performance measurement technology can greatly improve the large-scale equipment manufacturing ability. Therefore, the geometric parameters measurement, such as size, attitude and position, requires the measurement system with high precision, multi-function, portability and other characteristics. However, the existing measuring instruments, such as laser tracker, total station, photogrammetry system, mostly has single function, station moving and other shortcomings. Laser tracker needs to work with cooperative target, but it can hardly meet the requirement of measurement in extreme environment. Total station is mainly used for outdoor surveying and mapping, it is hard to achieve the demand of accuracy in industrial measurement. Photogrammetry system can achieve a wide range of multi-point measurement, but the measuring range is limited and need to repeatedly move station. The paper presents a non-contact opto-electronic measuring instrument, not only it can work by scanning the measurement path but also measuring the cooperative target by tracking measurement. The system is based on some key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatically target recognition and accurate aiming, precision control, assembly of complex mechanical system and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the twodimensional angle measuring module provides precision angle measurement. The system is suitable for the case of noncontact measurement of large-scale equipment, it can ensure the quality and performance of large-scale equipment throughout the process of manufacturing and improve the manufacturing ability of large-scale and high-end equipment.
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.
2012-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Hua, H.
2012-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff
Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; ...
2015-06-01
The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond thosemore » of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.« less
Online estimation of the wavefront outer scale profile from adaptive optics telemetry
NASA Astrophysics Data System (ADS)
Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.
2017-02-01
We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters [turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are later fitted to a linear combination of the simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (I) its insensitivity to large values of L0(h) as the telescope becomes blind to outer scales larger than its diameter; (II) the maximum number of observable layers given the limited number of independent inputs that the cross-correlation functions provide and (III) the minimum length of data required for a satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system that comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during 3 yr of campaigns show interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
Saunders, Rebecca E; Instrell, Rachael; Rispoli, Rossella; Jiang, Ming; Howell, Michael
2013-01-01
High-throughput screening (HTS) uses technologies such as RNA interference to generate loss-of-function phenotypes on a genomic scale. As these technologies become more popular, many research institutes have established core facilities of expertise to deal with the challenges of large-scale HTS experiments. As the efforts of core facility screening projects come to fruition, focus has shifted towards managing the results of these experiments and making them available in a useful format that can be further mined for phenotypic discovery. The HTS-DB database provides a public view of data from screening projects undertaken by the HTS core facility at the CRUK London Research Institute. All projects and screens are described with comprehensive assay protocols, and datasets are provided with complete descriptions of analysis techniques. This format allows users to browse and search data from large-scale studies in an informative and intuitive way. It also provides a repository for additional measurements obtained from screens that were not the focus of the project, such as cell viability, and groups these data so that it can provide a gene-centric summary across several different cell lines and conditions. All datasets from our screens that can be made available can be viewed interactively and mined for further hit lists. We believe that in this format, the database provides researchers with rapid access to results of large-scale experiments that might facilitate their understanding of genes/compounds identified in their own research. DATABASE URL: http://hts.cancerresearchuk.org/db/public.
NASA Astrophysics Data System (ADS)
Kakiichi, Koki; Dijkstra, Mark; Ciardi, Benedetta; Graziani, Luca
2016-12-01
The visibility of Lyα-emitting galaxies during the Epoch of Reionization is controlled by both diffuse H I patches in large-scale bubble morphology and small-scale absorbers. To investigate their impacts on Lyα transfer, we apply a novel combination of analytic modelling and cosmological hydrodynamical, radiative transfer simulations to three reionization models: (I) the `bubble' model, where only diffuse H I outside ionized bubbles is present; (II) the `web' model, where H I exists only in overdense self-shielded gas; and (III) the hybrid `web-bubble' model. The three models can explain the observed Lyα luminosity function equally well, but with very different H I fractions. This confirms a degeneracy between the ionization topology of the intergalactic medium (IGM) and the H I fraction inferred from Lyα surveys. We highlight the importance of the clustering of small-scale absorbers around galaxies. A combined analysis of the Lyα luminosity function and the Lyα fraction can break this degeneracy and provide constraints on the reionization history and its topology. Constraints can be improved by analysing the full MUV-dependent redshift evolution of the Lyα fraction of Lyman break galaxies. We find that the IGM-transmission probability distribution function is unimodal for bubble models and bimodal in web models. Comparing our models to observations, we infer that the neutral fraction at z ˜ 7 is likely to be of the order of tens of per cent when interpreted with bubble or web-bubble models, with a conservative lower limit ˜1 per cent when interpreted with web models.
NASA Astrophysics Data System (ADS)
Kenney, M. A.; Mohrig, D.; Hobbs, B. F.; Parker, G.
2011-12-01
Land loss in the Mississippi River Delta caused by subsidence and erosion has resulted in habitat loss, interference with human activities, and increased exposure of New Orleans and other settled areas to storm surge risks. Prior to dam and levee building and oil and gas production in the 20th century, the long term rates of land building roughly balanced land loss through subsidence. Now, however, sediment is being deposited at dramatically lower rates in shallow areas in and adjacent to the Delta, with much of the remaining sediment borne by the Mississippi being lost to the deep areas of the Gulf of Mexico. A few projects have been built in order to divert sediment from the river to areas where land can be built, and many more are under consideration as part of State of Louisiana and Federal planning processes. Most are small scale, although there have been some proposals for large engineered avulsions that would divert a significant fraction of the remaining available sediment (W. Kim, et al. 2009, EOS). However, there is debate over whether small or large diversions are the economically optimally and socially most acceptable size of such land building projects. From an economic point of view, the optimal size involves tradeoffs between scale economies in civil work construction, the relationship between depth of diversion and sediment concentration in river water, effects on navigation, and possible diminishing returns to land building at a single location as the edge of built land progresses into deeper waters. Because land building efforts could potentially involve billions of dollars of investment, it is important to gain as much benefit as possible from those expenditures. We present the result of a general analysis of scale economies in land building from engineered avulsions. The analysis addresses the question: how many projects of what size should be built at what time in order to maximize the amount of land built by a particular time? The analysis integrates three models: 1. coarse sediment diversion as a function of the width, depth, and timing of water diversions (using our field measurements of sediment concentration as a function of depth), 2. land building as a function of the location, water, and amount of sediment diverted, accounting for bathymetry, subsidence, and other factors, and 3. cost of building and operating the necessary civil works. Our statistical analysis of past diversions indicates existence of scale economies in width and scale of diseconomies in depth. The analysis explores general relationships between size, cost, and land building, and does not consider specific actual project proposals or locations. Sensitivity to assumptions about fine sediment capture, accumulation rates for organic material, and other inputs will be discussed.
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents findings from an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of discriminant function analysis…
Nowrousian, Minou; Würtz, Christian; Pöggeler, Stefanie; Kück, Ulrich
2004-03-01
One of the most challenging parts of large scale sequencing projects is the identification of functional elements encoded in a genome. Recently, studies of genomes of up to six different Saccharomyces species have demonstrated that a comparative analysis of genome sequences from closely related species is a powerful approach to identify open reading frames and other functional regions within genomes [Science 301 (2003) 71, Nature 423 (2003) 241]. Here, we present a comparison of selected sequences from Sordaria macrospora to their corresponding Neurospora crassa orthologous regions. Our analysis indicates that due to the high degree of sequence similarity and conservation of overall genomic organization, S. macrospora sequence information can be used to simplify the annotation of the N. crassa genome.
Large-scale label-free quantitative proteomics of the pea aphid-Buchnera symbiosis.
Poliakov, Anton; Russell, Calum W; Ponnala, Lalit; Hoops, Harold J; Sun, Qi; Douglas, Angela E; van Wijk, Klaas J
2011-06-01
Many insects are nutritionally dependent on symbiotic microorganisms that have tiny genomes and are housed in specialized host cells called bacteriocytes. The obligate symbiosis between the pea aphid Acyrthosiphon pisum and the γ-proteobacterium Buchnera aphidicola (only 584 predicted proteins) is particularly amenable for molecular analysis because the genomes of both partners have been sequenced. To better define the symbiotic relationship between this aphid and Buchnera, we used large-scale, high accuracy tandem mass spectrometry (nanoLC-LTQ-Orbtrap) to identify aphid and Buchnera proteins in the whole aphid body, purified bacteriocytes, isolated Buchnera cells and the residual bacteriocyte fraction. More than 1900 aphid and 400 Buchnera proteins were identified. All enzymes in amino acid metabolism annotated in the Buchnera genome were detected, reflecting the high (68%) coverage of the proteome and supporting the core function of Buchnera in the aphid symbiosis. Transporters mediating the transport of predicted metabolites were present in the bacteriocyte. Label-free spectral counting combined with hierarchical clustering, allowed to define the quantitative distribution of a subset of these proteins across both symbiotic partners, yielding no evidence for the selective transfer of protein among the partners in either direction. This is the first quantitative proteome analysis of bacteriocyte symbiosis, providing a wealth of information about molecular function of both the host cell and bacterial symbiont.
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
Availability of the sequences of entire genomes shifts the scientific curiosity towards the identification of function of the genomes in large scale as in genome studies. In the near future, data produced about cellular processes at molecular level will accumulate with an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing this data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors to provide an integrated, multi-user environment for visualizing and manipulating network of cellular events. This tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, microarray generated large-scale data interpretation, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
Sison, Margarette; Gerlai, Robert
2011-01-01
The zebrafish is gaining popularity in behavioral neuroscience perhaps because of a promise of efficient large scale mutagenesis and drug screens that could identify a substantial number of yet undiscovered molecular players involved in complex traits. Learning and memory are complex functions of the brain and the analysis of their mechanisms may benefit from such large scale zebrafish screens. One bottleneck in this research is the paucity of appropriate behavioral screening paradigms, which may be due to the relatively uncharacterized nature of the behavior of this species. Here we show that zebrafish exhibit good learning performance in a task adapted from the mammalian literature, a plus maze in which zebrafish are required to associate a neutral visual stimulus with the presence of conspecifics, the rewarding unconditioned stimulus. Furthermore, we show that MK-801, a non-competitive NMDA-R antagonist, impairs memory performance in this maze when administered right after training or just before recall but not when given before training at a dose that does not impair motor function, perception or motivation. These results suggest that the plus maze associative learning paradigm has face and construct validity and that zebrafish may become an appropriate and translationally relevant study species for the analysis of the mechanisms of vertebrate, including mammalian, learning and memory. PMID:21596149
ICA model order selection of task co-activation networks.
Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R
2013-01-01
Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.
ICA model order selection of task co-activation networks
Ray, Kimberly L.; McKay, D. Reese; Fox, Peter M.; Riedel, Michael C.; Uecker, Angela M.; Beckmann, Christian F.; Smith, Stephen M.; Fox, Peter T.; Laird, Angela R.
2013-01-01
Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders. PMID:24339802
Moyer, Jason T.; Halterman, Benjamin L.; Finkel, Leif H.; Wolf, John A.
2014-01-01
Striatal medium spiny neurons (MSNs) receive lateral inhibitory projections from other MSNs and feedforward inhibitory projections from fast-spiking, parvalbumin-containing striatal interneurons (FSIs). The functional roles of these connections are unknown, and difficult to study in an experimental preparation. We therefore investigated the functionality of both lateral (MSN-MSN) and feedforward (FSI-MSN) inhibition using a large-scale computational model of the striatal network. The model consists of 2744 MSNs comprised of 189 compartments each and 121 FSIs comprised of 148 compartments each, with dendrites explicitly represented and almost all known ionic currents included and strictly constrained by biological data as appropriate. Our analysis of the model indicates that both lateral inhibition and feedforward inhibition function at the population level to limit non-ensemble MSN spiking while preserving ensemble MSN spiking. Specifically, lateral inhibition enables large ensembles of MSNs firing synchronously to strongly suppress non-ensemble MSNs over a short time-scale (10–30 ms). Feedforward inhibition enables FSIs to strongly inhibit weakly activated, non-ensemble MSNs while moderately inhibiting activated ensemble MSNs. Importantly, FSIs appear to more effectively inhibit MSNs when FSIs fire asynchronously. Both types of inhibition would increase the signal-to-noise ratio of responding MSN ensembles and contribute to the formation and dissolution of MSN ensembles in the striatal network. PMID:25505406
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan. Kuang-Jung Tsai 1, Jie-Lun Chiang 2,Ming-Hsi Lee 2, Yie-Ruey Chen 1, 1Department of Land Management and Development, Chang Jung Christian Universityt, Tainan, Taiwan. 2Department of Soil and Water Conservation, National Pingtung University of Science and Technology, Pingtung, Taiwan. ABSTRACT The accumulated rainfall amount was recorded more than 2,900mm that were brought by Morakot typhoon in August, 2009 within continuous 3 days. Very serious landslides, and sediment related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by Soil and Water Conservation Bureau after Morakot event indicated that more than 10,904 sites of landslide with total sliding area of 18,113ha were found by this project. At the same time, all severe sediment related disaster areas are also characterized based on their disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events occurred at the southern Taiwan. Characteristics and mechanism of large scale landslide are collected on the basis of the field investigation technology integrated with GPS/GIS/RS technique. In order to decrease the risk of large scale landslides on slope land, the strategy of slope land conservation, and critical rainfall database should be set up and executed as soon as possible. Meanwhile, study on the establishment of critical rainfall value used for predicting large scale landslides induced by heavy rainfall become an important issue which was seriously concerned by the government and all people live in Taiwan. The mechanism of large scale landslide, rainfall frequency analysis ,sediment budge estimation and river hydraulic analysis under the condition of extremely climate change during the past 10 years would be seriously concerned and recognized as a required issue by this research. Hopefully, all results developed from this research can be used as a warning system for Predicting Large Scale Landslides in the southern Taiwan. Keywords:Heavy Rainfall, Large Scale, landslides, Critical Rainfall Value
Ruhí, Albert; Boix, Dani; Gascón, Stéphanie; Sala, Jordi; Batzer, Darold P
2013-01-01
In freshwater ecosystems, species compositions are known to be determined hierarchically by large to small‑scale environmental factors, based on the biological traits of the organisms. However, in ephemeral habitats this heuristic framework remains largely untested. Although temporary wetland faunas are constrained by a local filter (i.e., desiccation), we propose its magnitude may still depend on large-scale climate characteristics. If this is true, climate should be related to the degree of functional and taxonomic relatedness of invertebrate communities inhabiting seasonal wetlands. We tested this hypothesis in two ways. First, based on 52 biological traits for invertebrates, we conducted a case study to explore functional trends among temperate seasonal wetlands differing in the harshness (i.e., dryness) of their dry season. After finding evidence of trait filtering, we addressed whether it could be generalized across a broader climatic scale. To this end, a meta-analysis (225 seasonal wetlands spread across broad climatic categories: Arid, Temperate, and Cold) allowed us to identify whether an equivalent climate-dependent pattern of trait richness was consistent between the Nearctic and the Western Palearctic. Functional overlap of invertebrates increased from mild (i.e., Temperate) to harsher climates (i.e., Arid and Cold), and phylogenetic clustering (using taxonomy as a surrogate) was highest in Arid and lowest in Temperate wetlands. We show that, (i) as has been described in streams, higher relatedness than would be expected by chance is generally observed in seasonal wetland invertebrate communities; and (ii) this relatedness is not constant but climate-dependent, with the climate under which a given seasonal wetland is located determining the functional overlap and the phylogenetic clustering of the community. Finally, using a space-for-time substitution approach we suggest our results may anticipate how the invertebrate biodiversity embedded in these vulnerable and often overlooked ecosystems will be affected by long-term climate change.
Ruhí, Albert; Boix, Dani; Gascón, Stéphanie; Sala, Jordi; Batzer, Darold P.
2013-01-01
In freshwater ecosystems, species compositions are known to be determined hierarchically by large to small‑scale environmental factors, based on the biological traits of the organisms. However, in ephemeral habitats this heuristic framework remains largely untested. Although temporary wetland faunas are constrained by a local filter (i.e., desiccation), we propose its magnitude may still depend on large-scale climate characteristics. If this is true, climate should be related to the degree of functional and taxonomic relatedness of invertebrate communities inhabiting seasonal wetlands. We tested this hypothesis in two ways. First, based on 52 biological traits for invertebrates, we conducted a case study to explore functional trends among temperate seasonal wetlands differing in the harshness (i.e., dryness) of their dry season. After finding evidence of trait filtering, we addressed whether it could be generalized across a broader climatic scale. To this end, a meta-analysis (225 seasonal wetlands spread across broad climatic categories: Arid, Temperate, and Cold) allowed us to identify whether an equivalent climate-dependent pattern of trait richness was consistent between the Nearctic and the Western Palearctic. Functional overlap of invertebrates increased from mild (i.e., Temperate) to harsher climates (i.e., Arid and Cold), and phylogenetic clustering (using taxonomy as a surrogate) was highest in Arid and lowest in Temperate wetlands. We show that, (i) as has been described in streams, higher relatedness than would be expected by chance is generally observed in seasonal wetland invertebrate communities; and (ii) this relatedness is not constant but climate-dependent, with the climate under which a given seasonal wetland is located determining the functional overlap and the phylogenetic clustering of the community. Finally, using a space-for-time substitution approach we suggest our results may anticipate how the invertebrate biodiversity embedded in these vulnerable and often overlooked ecosystems will be affected by long-term climate change. PMID:24312347
Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number
NASA Astrophysics Data System (ADS)
Smith, W. R.; Wang, Q. X.
2017-08-01
The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution of the bubble radius history is obtained to the Rayleigh-Plesset equation in a closed form including both viscous and surface tension effects. Some important formulae are derived including the following: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formula for the energy of the bubble system, frequency of oscillation, and minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
InterProScan 5: genome-scale protein function classification
Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah
2014-01-01
Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists are frequently trying to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and the complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and/or conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBl-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626
Bravini, Elisabetta; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano
2017-04-01
To investigate dimensionality and the measurement properties of the Italian Lower Extremity Functional Scale using both classical test theory and Rasch analysis methods, and to provide insights for an improved version of the questionnaire. Rasch analysis of individual patient data. Rehabilitation centre. A total of 135 patients with musculoskeletal diseases of the lower limb. Patients were assessed with the Lower Extremity Functional Scale before and after the rehabilitation. Rasch analysis showed some problems related to rating scale category functioning, items fit, and items redundancy. After an iterative process, which resulted in the reduction of rating scale categories from 5 to 4, and in the deletion of 5 items, the psychometric properties of the Italian Lower Extremity Functional Scale improved. The retained 15 items with a 4-level response format fitted the Rasch model (internal construct validity), and demonstrated unidimensionality and good reliability indices (person-separation reliability 0.92; Cronbach's alpha 0.94). Then, the analysis showed differential item functioning for six of the retained items. The sensitivity to change of the Italian 15-item Lower Extremity Functional Scale was nearly equal to the one of the original version (effect size: 0.93 and 0.98; standardized response mean: 1.20 and 1.28, respectively for the 15-item and 20-item versions). The Italian Lower Extremity Functional Scale had unsatisfactory measurement properties. However, removing five items and simplifying the scoring from 5 to 4 levels resulted in a more valid measure with good reliability and sensitivity to change.
Williams, Leanne M
2016-01-01
Complex emotional, cognitive and self-reflective functions rely on the activation and connectivity of large-scale neural circuits. These circuits offer a relevant scale of focus for conceptualizing a taxonomy for depression and anxiety based on specific profiles (or biotypes) of neural circuit dysfunction. Here, the theoretical review first outlined the current consensus as to what constitutes the organization of large-scale circuits in the human brain identified using parcellation and meta-analysis. The focus is on neural circuits implicated in resting reflection (“default mode”), detection of “salience”, affective processing (“threat” and “reward”), “attention” and “cognitive control”. Next, the current evidence regarding which type of dysfunctions in these circuits characterize depression and anxiety disorders was reviewed, with an emphasis on published meta-analyses and reviews of circuit dysfunctions that have been identified in at least two well-powered case:control studies. Grounded in the review of these topics, a conceptual framework is proposed for considering neural circuit-defined “biotypes”. In this framework, biotypes are defined by profiles of extent of dysfunction on each large-scale circuit. The clinical implications of a biotype approach for guiding classification and treatment of depression and anxiety is considered. Future research directions will develop the validity and clinical utility of a neural circuit biotype model that spans diagnostic categories and helps to translate neuroscience into clinical practice in the real world. PMID:27653321
Neural Systems Underlying Individual Differences in Intertemporal Decision-making.
Elton, Amanda; Smith, Christopher T; Parrish, Michael H; Boettiger, Charlotte A
2017-03-01
Excessively choosing immediate over larger future rewards, or delay discounting (DD), associates with multiple clinical conditions. Individual differences in DD likely depend on variations in the activation of and functional interactions between networks, representing possible endophenotypes for associated disorders, including alcohol use disorders (AUDs). Numerous fMRI studies have probed the neural bases of DD, but investigations of large-scale networks remain scant. We addressed this gap by testing whether activation within large-scale networks during Now/Later decision-making predicts individual differences in DD. To do so, we scanned 95 social drinkers (18-40 years old; 50 women) using fMRI during hypothetical choices between small monetary amounts available "today" or larger amounts available later. We identified neural networks engaged during Now/Later choice using independent component analysis and tested the relationship between component activation and degree of DD. The activity of two components during Now/Later choice correlated with individual DD rates: A temporal lobe network positively correlated with DD, whereas a frontoparietal-striatal network negatively correlated with DD. Activation differences between these networks predicted individual differences in DD, and their negative correlation during Now/Later choice suggests functional competition. A generalized psychophysiological interactions analysis confirmed a decrease in their functional connectivity during decision-making. The functional connectivity of these two networks negatively correlates with alcohol-related harm, potentially implicating these networks in AUDs. These findings provide novel insight into the neural underpinnings of individual differences in impulsive decision-making with potential implications for addiction and related disorders in which impulsivity is a defining feature.
Spatial correlations, clustering and percolation-like transitions in homicide crimes
NASA Astrophysics Data System (ADS)
Alves, L. G. A.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2015-07-01
The spatial dynamics of criminal activities has been recently studied through statistical physics methods; however, models and results have been focusing on local scales (city level) and much less is known about these patterns at larger scales, e.g. at a country level. Here we report on a characterization of the spatial dynamics of the homicide crimes along the Brazilian territory using data from all cities (˜5000) in a period of more than thirty years. Our results show that the spatial correlation function in the per capita homicides decays exponentially with the distance between cities and that the characteristic correlation length displays an acute increasing trend in the latest years. We also investigate the formation of spatial clusters of cities via a percolation-like analysis, where clustering of cities and a phase-transition-like behavior describing the size of the largest cluster as a function of a homicide threshold are observed. This transition-like behavior presents evolutive features characterized by an increasing in the homicide threshold (where the transitions occur) and by a decreasing in the transition magnitudes (length of the jumps in the cluster size). We believe that our work sheds new light on the spatial patterns of criminal activities at large scales, which may contribute for better political decisions and resources allocation as well as opens new possibilities for modeling criminal activities by setting up fundamental empirical patterns at large scales.
Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-28
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
NASA Astrophysics Data System (ADS)
Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-01
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
Wang, Jinan; Shao, Qiang; Xu, Zhijian; Liu, Yingtao; Yang, Zhuo; Cossins, Benjamin P; Jiang, Hualiang; Chen, Kaixian; Shi, Jiye; Zhu, Weiliang
2014-01-09
Large-scale conformational changes of proteins are usually associated with the binding of ligands. Because the conformational changes are often related to the biological functions of proteins, understanding the molecular mechanisms of these motions and the effects of ligand binding becomes very necessary. In the present study, we use the combination of normal-mode analysis and umbrella sampling molecular dynamics simulation to delineate the atomically detailed conformational transition pathways and the associated free-energy landscapes for three well-known protein systems, viz., adenylate kinase (AdK), calmodulin (CaM), and p38α kinase in the absence and presence of respective ligands. For each protein under study, the transient conformations along the conformational transition pathway and thermodynamic observables are in agreement with experimentally and computationally determined ones. The calculated free-energy profiles reveal that AdK and CaM are intrinsically flexible in structures without obvious energy barrier, and their ligand binding shifts the equilibrium from the ligand-free to ligand-bound conformation (population shift mechanism). In contrast, the ligand binding to p38α leads to a large change in free-energy barrier (ΔΔG ≈ 7 kcal/mol), promoting the transition from DFG-in to DFG-out conformation (induced fit mechanism). Moreover, the effect of the protonation of D168 on the conformational change of p38α is also studied, which reduces the free-energy difference between the two functional states of p38α and thus further facilitates the conformational interconversion. Therefore, the present study suggests that the detailed mechanism of ligand binding and the associated conformational transition is not uniform for all kinds of proteins but correlated to their respective biological functions.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley
2017-04-01
Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation corresponding to 14 % per degree. Our hypothesis - as supported by a simple physical argument presented here - is that this 2CC behaviour arises from the physics of convective clouds. So, we think that this response is due to local feedbacks related to the convective activity, while other large scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for the data in The Netherlands. Events are precipitation measurements clustered in time and space derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity on convective activity. Almost no differences in relative humidity and the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. This increase in large-scale moisture convergence appears to be consequence of latent heat release due to the convective activity as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperature range, suggesting potentially strong impacts of climatic warming.
Understanding the origins of uncertainty in landscape-scale variations of emissions of nitrous oxide
NASA Astrophysics Data System (ADS)
Milne, Alice; Haskard, Kathy; Webster, Colin; Truan, Imogen; Goulding, Keith
2014-05-01
Nitrous oxide is a potent greenhouse gas which is over 300 times more radiatively effective than carbon dioxide. In the UK, the agricultural sector is estimated to be responsible for over 80% of nitrous oxide emissions, with these emissions resulting from livestock and farmers adding nitrogen fertilizer to soils. For the purposes of reporting emissions to the IPCC, the estimates are calculated using simple models whereby readily-available national or international statistics are combined with IPCC default emission factors. The IPCC emission factor for direct emissions of nitrous oxide from soils has a very large uncertainty. This is primarily because the variability of nitrous oxide emissions in space is large and this results in uncertainty that may be regarded as sample noise. To both reduce uncertainty through improved modelling, and to communicate an understanding of this uncertainty, we must understand the origins of the variation. We analysed data on nitrous oxide emission rate and some other soil properties collected from a 7.5-km transect across contrasting land uses and parent materials in eastern England. We investigated the scale-dependence and spatial uniformity of the correlations between soil properties and emission rates from farm to landscape scale using wavelet analysis. The analysis revealed a complex pattern of scale-dependence. Emission rates were strongly correlated with a process-specific function of the water-filled pore space at the coarsest scale and nitrate at intermediate and coarsest scales. We also found significant correlations between pH and emission rates at the intermediate scales. The wavelet analysis showed that these correlations were not spatially uniform and that at certain scales changes in parent material coincided with significant changes in correlation. Our results indicate that, at the landscape scale, nitrate content and water-filled pore space are key soil properties for predicting nitrous oxide emissions and should therefore be incorporated into process models and emission factors for inventory calculations.
2009-01-01
Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which is developed to serve the on-going large-scale PB insertional mutagenesis project. A lightweight enterprise-level development framework Struts-Spring-Hibernate is used here to ensure constructive and flexible support to the application. The MP-PBmice database system has three major features: strict access-control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on daily basis, and is capable of providing real time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505
Huang, Pengyun; Lin, Fucheng
2014-01-01
Because of great challenges and workload in deleting genes on a large scale, the functions of most genes in pathogenic fungi are still unclear. In this study, we developed a high-throughput gene knockout system using a novel yeast-Escherichia-Agrobacterium shuttle vector, pKO1B, in the rice blast fungus Magnaporthe oryzae. Using this method, we deleted 104 fungal-specific Zn2Cys6 transcription factor (TF) genes in M. oryzae. We then analyzed the phenotypes of these mutants with regard to growth, asexual and infection-related development, pathogenesis, and 9 abiotic stresses. The resulting data provide new insights into how this rice pathogen of global significance regulates important traits in the infection cycle through Zn2Cys6TF genes. A large variation in biological functions of Zn2Cys6TF genes was observed under the conditions tested. Sixty-one of 104 Zn2Cys6 TF genes were found to be required for fungal development. In-depth analysis of TF genes revealed that TF genes involved in pathogenicity frequently tend to function in multiple development stages, and disclosed many highly conserved but unidentified functional TF genes of importance in the fungal kingdom. We further found that the virulence-required TF genes GPF1 and CNF2 have similar regulation mechanisms in the gene expression involved in pathogenicity. These experimental validations clearly demonstrated the value of a high-throughput gene knockout system in understanding the biological functions of genes on a genome scale in fungi, and provided a solid foundation for elucidating the gene expression network that regulates the development and pathogenicity of M. oryzae. PMID:25299517
Disruption of posteromedial large-scale neural communication predicts recovery from coma.
Silva, Stein; de Pasquale, Francesco; Vuillaume, Corine; Riu, Beatrice; Loubinoux, Isabelle; Geeraerts, Thomas; Seguin, Thierry; Bounes, Vincent; Fourcade, Olivier; Demonet, Jean-Francois; Péran, Patrice
2015-12-08
We hypothesize that the major consciousness deficit observed in coma is due to the breakdown of long-range neuronal communication supported by precuneus and posterior cingulate cortex (PCC), and that prognosis depends on a specific connectivity pattern in these networks. We compared 27 prospectively recruited comatose patients who had severe brain injury (Glasgow Coma Scale score <8; 14 traumatic and 13 anoxic cases) with 14 age-matched healthy participants. Standardized clinical assessment and fMRI were performed on average 4 ± 2 days after withdrawal of sedation. Analysis of resting-state fMRI connectivity involved a hypothesis-driven, region of interest-based strategy. We assessed patient outcome after 3 months using the Coma Recovery Scale-Revised (CRS-R). Patients who were comatose showed a significant disruption of functional connectivity of brain areas spontaneously synchronized with PCC, globally notwithstanding etiology. The functional connectivity strength between PCC and medial prefrontal cortex (mPFC) was significantly different between comatose patients who went on to recover and those who eventually scored an unfavorable outcome 3 months after brain injury (Kruskal-Wallis test, p < 0.001; linear regression between CRS-R and PCC-mPFC activity coupling at rest, Spearman ρ = 0.93, p < 0.003). In both etiology groups (traumatic and anoxic), changes in the connectivity of PCC-centered, spontaneously synchronized, large-scale networks account for the loss of external and internal self-centered awareness observed during coma. Sparing of functional connectivity between PCC and mPFC may predict patient outcome, and further studies are needed to substantiate this potential prognosis biomarker. © 2015 American Academy of Neurology.
Chambers, Jeffrey Q; Negron-Juarez, Robinson I; Marra, Daniel Magnabosco; Di Vittorio, Alan; Tews, Joerg; Roberts, Dar; Ribeiro, Gabriel H P M; Trumbore, Susan E; Higuchi, Niro
2013-03-05
Old-growth forest ecosystems comprise a mosaic of patches in different successional stages, with the fraction of the landscape in any particular state relatively constant over large temporal and spatial scales. The size distribution and return frequency of disturbance events, and subsequent recovery processes, determine to a large extent the spatial scale over which this old-growth steady state develops. Here, we characterize this mosaic for a Central Amazon forest by integrating field plot data, remote sensing disturbance probability distribution functions, and individual-based simulation modeling. Results demonstrate that a steady state of patches of varying successional age occurs over a relatively large spatial scale, with important implications for detecting temporal trends on plots that sample a small fraction of the landscape. Long highly significant stochastic runs averaging 1.0 Mg biomass⋅ha(-1)⋅y(-1) were often punctuated by episodic disturbance events, resulting in a sawtooth time series of hectare-scale tree biomass. To maximize the detection of temporal trends for this Central Amazon site (e.g., driven by CO2 fertilization), plots larger than 10 ha would provide the greatest sensitivity. A model-based analysis of fractional mortality across all gap sizes demonstrated that 9.1-16.9% of tree mortality was missing from plot-based approaches, underscoring the need to combine plot and remote-sensing methods for estimating net landscape carbon balance. Old-growth tropical forests can exhibit complex large-scale structure driven by disturbance and recovery cycles, with ecosystem and community attributes of hectare-scale plots exhibiting continuous dynamic departures from a steady-state condition.
Demonstration-Scale High-Cell-Density Fermentation of Pichia pastoris.
Liu, Wan-Cang; Zhu, Ping
2018-01-01
Pichia pastoris has been one of the most successful heterologous overexpression systems in generating proteins for large-scale production through high-cell-density fermentation. However, optimizing conditions of the large-scale high-cell-density fermentation for biochemistry and industrialization is usually a laborious and time-consuming process. Furthermore, it is often difficult to produce authentic proteins in large quantities, which is a major obstacle for functional and structural features analysis and industrial application. For these reasons, we have developed a protocol for efficient demonstration-scale high-cell-density fermentation of P. pastoris, which employs a new methanol-feeding strategy-biomass-stat strategy and a strategy of increased air pressure instead of pure oxygen supplement. The protocol included three typical stages of glycerol batch fermentation (initial culture phase), glycerol fed-batch fermentation (biomass accumulation phase), and methanol fed-batch fermentation (induction phase), which allows direct online-monitoring of fermentation conditions, including broth pH, temperature, DO, anti-foam generation, and feeding of glycerol and methanol. Using this protocol, production of the recombinant β-xylosidase of Lentinula edodes origin in 1000-L scale fermentation can be up to ~900 mg/L or 9.4 mg/g cells (dry cell weight, intracellular expression), with the specific production rate and average specific production of 0.1 mg/g/h and 0.081 mg/g/h, respectively. The methodology described in this protocol can be easily transferred to other systems, and eligible to scale up for a large number of proteins used in either the scientific studies or commercial purposes.
A space-time multifractal analysis on radar rainfall sequences from central Poland
NASA Astrophysics Data System (ADS)
Licznar, Paweł; Deidda, Roberto
2014-05-01
Rainfall downscaling belongs to most important tasks of modern hydrology. Especially from the perspective of urban hydrology there is real need for development of practical tools for possible rainfall scenarios generation. Rainfall scenarios of fine temporal scale reaching single minutes are indispensable as inputs for hydrological models. Assumption of probabilistic philosophy of drainage systems design and functioning leads to widespread application of hydrodynamic models in engineering practice. However models like these covering large areas could not be supplied with only uncorrelated point-rainfall time series. They should be rather supplied with space time rainfall scenarios displaying statistical properties of local natural rainfall fields. Implementation of a Space-Time Rainfall (STRAIN) model for hydrometeorological applications in Polish conditions, such as rainfall downscaling from the large scales of meteorological models to the scale of interest for rainfall-runoff processes is the long-distance aim of our research. As an introduction part of our study we verify the veracity of the following STRAIN model assumptions: rainfall fields are isotropic and statistically homogeneous in space; self-similarity holds (so that, after having rescaled the time by the advection velocity, rainfall is a fully homogeneous and isotropic process in the space-time domain); statistical properties of rainfall are characterized by an "a priori" known multifractal behavior. We conduct a space-time multifractal analysis on radar rainfall sequences selected from the Polish national radar system POLRAD. Radar rainfall sequences covering the area of 256 km x 256 km of original 2 km x 2 km spatial resolution and 15 minutes temporal resolution are used as study material. Attention is mainly focused on most severe summer convective rainfalls. It is shown that space-time rainfall can be considered with a good approximation to be a self-similar multifractal process. Multifractal analysis is carried out assuming Taylor's hypothesis to hold and the advection velocity needed to rescale the time dimension is assumed to be equal about 16 km/h. This assumption is verified by the analysis of autocorrelation functions along the x and y directions of "rainfall cubes" and along the time axis rescaled with assumed advection velocity. In general for analyzed rainfall sequences scaling is observed for spatial scales ranging from 4 to 256 km and for timescales from 15 min to 16 hours. However in most cases scaling break is identified for spatial scales between 4 and 8, corresponding to spatial dimensions of 16 km to 32 km. It is assumed that the scaling break occurrence at these particular scales in central Poland conditions could be at least partly explained by the rainfall mesoscale gap (on the edge of meso-gamma, storm-scale and meso-beta scale).
Liu, Zhongming; de Zwart, Jacco A.; Chang, Catie; Duan, Qi; van Gelderen, Peter; Duyn, Jeff H.
2014-01-01
Spontaneous activity in the human brain occurs in complex spatiotemporal patterns that may reflect functionally specialized neural networks. Here, we propose a subspace analysis method to elucidate large-scale networks by the joint analysis of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data. The new approach is based on the notion that the neuroelectrical activity underlying the fMRI signal may have EEG spectral features that report on regional neuronal dynamics and interregional interactions. Applying this approach to resting healthy adults, we indeed found characteristic spectral signatures in the EEG correlates of spontaneous fMRI signals at individual brain regions as well as the temporal synchronization among widely distributed regions. These spectral signatures not only allowed us to parcel the brain into clusters that resembled the brain's established functional subdivision, but also offered important clues for disentangling the involvement of individual regions in fMRI network activity. PMID:23796947
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For eciently updating models of large-scale structures, the response surface (RS) method based on radial basis : functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying : the proposed method a...
Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales
Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.
2014-01-01
Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949
Assessing the importance of internal tide scattering in the deep ocean
NASA Astrophysics Data System (ADS)
Haji, Maha; Peacock, Thomas; Carter, Glenn; Johnston, T. M. Shaun
2014-11-01
Tides are one of the main sources of energy input to the deep ocean, and the pathways of energy transfer from barotropic tides to turbulent mixing scales via internal tides are not well understood. Large-scale (low-mode) internal tides account for the bulk of energy extracted from barotropic tides and have been observed to propagate over 1000 km from their generation sites. We seek to examine the fate of these large-scale internal tides and the processes by which their energy is transferred, or ``scattered,'' to small-scale (high-mode) internal tides, which dissipate locally and are responsible for internal tide driven mixing. The EXperiment on Internal Tide Scattering (EXITS) field study conducted in 2010-2011 sought to examine the role of topographic scattering at the Line Islands Ridge. The scattering process was examined via data from three moorings equipped with moored profilers, spanning total depths of 3000--5000 m. The results of our field data analysis are rationalized via comparison to data from two- and three-dimensional numerical models and a two-dimensional analytical model based on Green function theory.
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Honeycomb: Visual Analysis of Large Scale Social Networks
NASA Astrophysics Data System (ADS)
van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.
The rise in the use of social network sites allows us to collect large amounts of user reported data on social structures and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resolve to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large scale corporate social networking site as an example. Additionally, we introduce a new probability based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.
2014-12-01
In this research, we report intermediate scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) - GPS (Global Positioning System) satellites radio link. The high inclination of the CASSIOPE satellite and high rate of signal receptionby the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionospherecoupling, solar wind effects, etc. with unprecedented details compared to that possible in the past. We have carried out high spatial resolution analysis in altitude and geomagnetic latitude of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate scale, scintillation-producing plasma density irregularities, which corresponds to 2 to 40 km spatial scales were inferred by applying multi-scale spectral analysis on the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. In specific terms, we found that large length scales and and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.; Cuk, S. M.
1973-01-01
This report presents the development and description of the decomposition aggregation approach to stability investigations of high dimension mathematical models of dynamic systems. The high dimension vector differential equation describing a large dynamic system is decomposed into a number of lower dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude over-all system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.
How much does a tokamak reactor cost?
NASA Astrophysics Data System (ADS)
Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.
2017-10-01
The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both overnight cost and levelized cost of electricity the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high T rare earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.
Statistical analysis of Hasegawa-Wakatani turbulence
NASA Astrophysics Data System (ADS)
Anderson, Johan; Hnat, Bogdan
2017-06-01
Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows, and small scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.
Morra, Giulia; Potestio, Raffaello; Micheletti, Cristian; Colombo, Giorgio
2012-01-01
Understanding how local protein modifications, such as binding small-molecule ligands, can trigger and regulate large-scale motions of large protein domains is a major open issue in molecular biology. We address various aspects of this problem by analyzing and comparing atomistic simulations of Hsp90 family representatives for which crystal structures of the full length protein are available: mammalian Grp94, yeast Hsp90 and E.coli HtpG. These chaperones are studied in complex with the natural ligands ATP, ADP and in the Apo state. Common key aspects of their functional dynamics are elucidated with a novel multi-scale comparison of their internal dynamics. Starting from the atomic resolution investigation of internal fluctuations and geometric strain patterns, a novel analysis of domain dynamics is developed. The results reveal that the ligand-dependent structural modulations mostly consist of relative rigid-like movements of a limited number of quasi-rigid domains, shared by the three proteins. Two common primary hinges for such movements are identified. The first hinge, whose functional role has been demonstrated by several experimental approaches, is located at the boundary between the N-terminal and Middle-domains. The second hinge is located at the end of a three-helix bundle in the Middle-domain and unfolds/unpacks going from the ATP- to the ADP-state. This latter site could represent a promising novel druggable allosteric site common to all chaperones. PMID:22457611
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of largescale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However finding such comparison system is not trivial. In this work, we propose a sum-of-squares based completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinatemore » with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances.We illustrate the control algorithm on a network of interacting Van der Pol systems.« less
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
For solving nonlinear continuous problems of optimization numerous nature inspired optimization techniques are being proposed in literature which can be implemented to solve real life problems wherein the conventional techniques cannot be applied. Grey Wolf Optimizer is one of such technique which is gaining popularity since the last two years. The objective of this paper is to investigate the performance of Grey Wolf Optimization Algorithm on large scale optimization problems. The Algorithm is implemented on 5 common scalable problems appearing in literature namely Sphere, Rosenbrock, Rastrigin, Ackley and Griewank Functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature inspired Optimization Algorithm for large scale problems, except Rosenbrock which is a unimodal function.
FunRich proteomics software analysis, let the fun begin!
Benito-Martin, Alberto; Peinado, Héctor
2015-08-01
Protein MS analysis is the preferred method for unbiased protein identification. It is normally applied to a large number of both small-scale and high-throughput studies. However, user-friendly computational tools for protein analysis are still needed. In this issue, Mathivanan and colleagues (Proteomics 2015, 15, 2597-2601) report the development of FunRich software, an open-access software that facilitates the analysis of proteomics data, providing tools for functional enrichment and interaction network analysis of genes and proteins. FunRich is a reinterpretation of proteomic software, a standalone tool combining ease of use with customizable databases, free access, and graphical representations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Large-scale prediction of ADAR-mediated effective human A-to-I RNA editing.
Yao, Li; Wang, Heming; Song, Yuanyuan; Dai, Zhen; Yu, Hao; Yin, Ming; Wang, Dongxu; Yang, Xin; Wang, Jinlin; Wang, Tiedong; Cao, Nan; Zhu, Jimin; Shen, Xizhong; Song, Guangqi; Zhao, Yicheng
2017-08-10
Adenosine-to-inosine (A-to-I) editing by adenosine deaminase acting on the RNA (ADAR) proteins is one of the most frequent modifications during post- and co-transcription. To facilitate the assignment of biological functions to specific editing sites, we designed an automatic online platform to annotate A-to-I RNA editing sites in pre-mRNA splicing signals, microRNAs (miRNAs) and miRNA target untranslated regions (3' UTRs) from human (Homo sapiens) high-throughput sequencing data and predict their effects based on large-scale bioinformatic analysis. After analysing plenty of previously reported RNA editing events and human normal tissues RNA high-seq data, >60 000 potentially effective RNA editing events on functional genes were found. The RNA Editing Plus platform is available for free at https://www.rnaeditplus.org/, and we believe our platform governing multiple optimized methods will improve further studies of A-to-I-induced editing post-transcriptional regulation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark H.
2016-05-04
This software is employed for 3D visualization of X-ray diffraction (XRD) data with functionality for slicing, reorienting, isolating and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles of phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scaler which is represented as a rainbow color scale. A GUI interface and scrolling tools along with interactive function via the mouse allowmore » for fast manipulation of these large datasets so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.« less
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
Integrated analysis of germline and somatic variants in ovarian cancer.
Kanchi, Krishna L; Johnson, Kimberly J; Lu, Charles; McLellan, Michael D; Leiserson, Mark D M; Wendl, Michael C; Zhang, Qunyuan; Koboldt, Daniel C; Xie, Mingchao; Kandoth, Cyriac; McMichael, Joshua F; Wyczalkowski, Matthew A; Larson, David E; Schmidt, Heather K; Miller, Christopher A; Fulton, Robert S; Spellman, Paul T; Mardis, Elaine R; Druley, Todd E; Graubert, Timothy A; Goodfellow, Paul J; Raphael, Benjamin J; Wilson, Richard K; Ding, Li
2014-01-01
We report the first large-scale exome-wide analysis of the combined germline-somatic landscape in ovarian cancer. Here we analyse germline and somatic alterations in 429 ovarian carcinoma cases and 557 controls. We identify 3,635 high confidence, rare truncation and 22,953 missense variants with predicted functional impact. We find germline truncation variants and large deletions across Fanconi pathway genes in 20% of cases. Enrichment of rare truncations is shown in BRCA1, BRCA2 and PALB2. In addition, we observe germline truncation variants in genes not previously associated with ovarian cancer susceptibility (NF1, MAP3K4, CDKN2B and MLL3). Evidence for loss of heterozygosity was found in 100 and 76% of cases with germline BRCA1 and BRCA2 truncations, respectively. Germline-somatic interaction analysis combined with extensive bioinformatics annotation identifies 222 candidate functional germline truncation and missense variants, including two pathogenic BRCA1 and 1 TP53 deleterious variants. Finally, integrated analyses of germline and somatic variants identify significantly altered pathways, including the Fanconi, MAPK and MLL pathways.
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
Decoupling processes and scales of shoreline morphodynamics
Hapke, Cheryl J.; Plant, Nathaniel G.; Henderson, Rachel E.; Schwab, William C.; Nelson, Timothy R.
2016-01-01
Behavior of coastal systems on time scales ranging from single storm events to years and decades is controlled by both small-scale sediment transport processes and large-scale geologic, oceanographic, and morphologic processes. Improved understanding of coastal behavior at multiple time scales is required for refining models that predict potential erosion hazards and for coastal management planning and decision-making. Here we investigate the primary controls on shoreline response along a geologically-variable barrier island on time scales resolving extreme storms and decadal variations over a period of nearly one century. An empirical orthogonal function analysis is applied to a time series of shoreline positions at Fire Island, NY to identify patterns of shoreline variance along the length of the island. We establish that there are separable patterns of shoreline behavior that represent response to oceanographic forcing as well as patterns that are not explained by this forcing. The dominant shoreline behavior occurs over large length scales in the form of alternating episodes of shoreline retreat and advance, presumably in response to storms cycles. Two secondary responses include long-term response that is correlated to known geologic variations of the island and the other reflects geomorphic patterns with medium length scale. Our study also includes the response to Hurricane Sandy and a period of post-storm recovery. It was expected that the impacts from Hurricane Sandy would disrupt long-term trends and spatial patterns. We found that the response to Sandy at Fire Island is not notable or distinguishable from several other large storms of the prior decade.
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
SummarySediment transport processes vary at a variety of time scales - from seconds, hours, days to months and years. Multiple time scales exist in the system of flow, sediment transport and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired details. Recognizing the importance of different varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform method (HHT) to the field of sediment transport for the time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of the Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales, and improves the analysis of data series. The HSA can display the succession of time varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce the state-of-the-art technique for the multiple time sales analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature in sediment transport. It can be demonstrated by the HHT analysis that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/ w. In the final part, HHT results are compared with an available time scale formula in literature.
Wavelet-based multiscale window transform and energy and vorticity analysis
NASA Astrophysics Data System (ADS)
Liang, Xiang San
A new methodology, Multiscale Energy and Vorticity Analysis (MS-EVA), is developed to investigate sub-mesoscale, meso-scale, and large-scale dynamical interactions in geophysical fluid flows which are intermittent in space and time. The development begins with the construction of a wavelet-based functional analysis tool, the multiscale window transform (MWT), which is local, orthonormal, self-similar, and windowed on scale. The MWT is first built over the real line then modified onto a finite domain. Properties are explored, the most important one being the property of marginalization which brings together a quadratic quantity in physical space with its phase space representation. Based on MWT the MS-EVA is developed. Energy and enstrophy equations for the large-, meso-, and sub-meso-scale windows are derived and their terms interpreted. The processes thus represented are classified into four categories: transport; transfer, conversion, and dissipation/diffusion. The separation of transport from transfer is made possible with the introduction of the concept of perfect transfer. By the property of marginalization, the classical energetic analysis proves to be a particular case of the MS-EVA. The MS-EVA developed is validated with classical instability problems. The validation is carried out through two steps. First, it is established that the barotropic and baroclinic instabilities are indicated by the spatial averages of certain transfer term interaction analyses. Then calculations of these indicators are made with an Eady model and a Kuo model. The results agree precisely with what is expected from their analytical solutions, and the energetics reproduced reveal a consistent and important aspect of the unknown dynamic structures of instability processes. As an application, the MS-EVA is used to investigate the Iceland-Faeroe frontal (IFF) variability. A MS-EVA-ready dataset is first generated, through a forecasting study with the Harvard Ocean Prediction System using the data gathered during the 1993 NRV Alliance cruise. The application starts with a determination of the scale window bounds, which characterize a double-peak structure in either the time wavelet spectrum or the space wavelet spectrum. The resulting energetics, when locally averaged, reveal that there is a clear baroclinic instability happening around the cold tongue intrusion observed in the forecast. Moreover, an interaction analysis shows that the energy released by the instability indeed goes to the meso-scale window and fuel the growth of the intrusion. The sensitivity study shows that, in this case, the key to a successful application is a correct decomposition of the large-scale window from the meso-scale window.
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-01-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies. PMID:19137113
NASA Astrophysics Data System (ADS)
Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe
2017-06-01
Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues \\{λ _1,\\ldots ,λ _N\\}. We study the distribution of truncated linear statistics of the form \\tilde{L}=\\sum _{i=1}^p f(λ _i) with p
NASA Astrophysics Data System (ADS)
Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás
2018-05-01
In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p→ i| and for arbitrary time differences ti in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small ti and in the limit ti → ∞. At small ti, the leading contribution at large wave-numbers is logarithmically equivalent to -α (ɛL ) 2 /3|∑tip→ i|2, where α is a non-universal constant, L is the integral scale, and ɛ is the mean energy injection rate. For the 2-point function, the (tp)2 dependence is known to originate from the sweeping effect. The derived formula embodies the generalization of the effect of sweeping to n-point correlation functions. At large wave-numbers and large ti, we show that the ti2 dependence in the leading order contribution crosses over to a |ti| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and the optimal trajectory planning for a flexure-based macro-micro manipulator, which is dedicated to the large-scale and high-speed tasks. In particular, a macro- micro manipulator composed of a servo motor, a rigid arm and a compliant microgripper is focused. Moreover, both flexure hinges and flexible beams are considered. By combining the pseudorigid-body-model method, the assumed mode method and the Lagrange equation, the overall dynamic model is derived. Then, the rigid-flexible-coupling characteristics are analyzed by numerical simulations. After that, the microscopic scale vibration excited by the large-scale motion is reduced through the trajectory planning approach. Especially, a fitness function regards the comprehensive excitation torque of the compliant microgripper is proposed. The reference curve and the interpolation curve using the quintic polynomial trajectories are adopted. Afterwards, an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, the numerical simulations and experiments validate the feasibility and the effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration reduces approximately 54.9%, and the settling time decreases 57.1%. Therefore, the operation efficiency and manipulation stability are significantly improved.
Investigating smoke's influence on primary production throughout the Amazon
NASA Astrophysics Data System (ADS)
Flanner, M. G.; Mahowald, N. M.; Zender, C. S.; Randerson, J. T.; Tosca, M. G.
2007-12-01
Smoke from annual burning in the Amazon causes large reduction in surface insolation and increases the diffuse fraction of photosynthetically-active radiation (PAR). These effects have competing influence on gross primary production (GPP). Recent studies indicate that the sign of net influence depends on aerosol optical depth, but the magnitude of smoke's effect on continental-scale carbon cycling is very poorly constrained and may constitute an important term of fire's net impact on carbon storage. To investigate widespread effects of Amazon smoke on surface radiation properties, we apply a version of the NCAR Community Atmosphere Model with prognostic aerosol transport, driven with re-analysis winds. Carbon aerosol emissions are derived from the Global Fire Emissions Database (GFED). We use AERONET observations to identify model biases in aerosol optical depth, single-scatter albedo, and surface radiative forcing, and prescribe new aerosol optical properties based on field observations to improve model agreement with AERONET data. Finally, we quantify a potential range of smoke-induced change in large-scale GPP based on: 1) ground measurements of GPP in the Amazon as a function of aerosol optical depth and diffuse fraction of PAR, and 2) empirical functions of ecosystem-scale photosynthesis rates currently employed in models such as the Community Land Model (CLM).
Robust prediction of individual creative ability from brain functional connectivity.
Beaty, Roger E; Kenett, Yoed N; Christensen, Alexander P; Rosenberg, Monica D; Benedek, Mathias; Chen, Qunlin; Fink, Andreas; Qiu, Jiang; Kwapil, Thomas R; Kane, Michael J; Silvia, Paul J
2018-01-30
People's ability to think creatively is a primary means of technological and cultural progress, yet the neural architecture of the highly creative brain remains largely undefined. Here, we employed a recently developed method in functional brain imaging analysis-connectome-based predictive modeling-to identify a brain network associated with high-creative ability, using functional magnetic resonance imaging (fMRI) data acquired from 163 participants engaged in a classic divergent thinking task. At the behavioral level, we found a strong correlation between creative thinking ability and self-reported creative behavior and accomplishment in the arts and sciences ( r = 0.54). At the neural level, we found a pattern of functional brain connectivity related to high-creative thinking ability consisting of frontal and parietal regions within default, salience, and executive brain systems. In a leave-one-out cross-validation analysis, we show that this neural model can reliably predict the creative quality of ideas generated by novel participants within the sample. Furthermore, in a series of external validation analyses using data from two independent task fMRI samples and a large task-free resting-state fMRI sample, we demonstrate robust prediction of individual creative thinking ability from the same pattern of brain connectivity. The findings thus reveal a whole-brain network associated with high-creative ability comprised of cortical hubs within default, salience, and executive systems-intrinsic functional networks that tend to work in opposition-suggesting that highly creative people are characterized by the ability to simultaneously engage these large-scale brain networks.
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable to tackle the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval, summarize the challenges/opportunities of medical image analytics on a large-scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analysis of solar wind fluctuations are often performed in the context of intermittent turbulence and center around methods to quantify statistical scaling, such as power spectra and structure functions which assume a stationary process. The solar wind exhibits large scale secular changes and so the question arises as to whether the timeseries of the fluctuations is non-stationary. One approach is to seek a local stationarity by parsing the time interval over which statistical analysis is performed. Hence, natural systems such as the solar wind unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known for finite variance processes to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior will depend on the details of the process, and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that in particular for heavy tailed processes, for realizable N, one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some “real world” time series from the solar wind. With fewer datapoints the stationary timeseries becomes indistinguishable from a nonstationary process and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
Analysis of blood-based gene expression in idiopathic Parkinson disease.
Shamir, Ron; Klein, Christine; Amar, David; Vollstedt, Eva-Juliane; Bonin, Michael; Usenovic, Marija; Wong, Yvette C; Maver, Ales; Poths, Sven; Safer, Hershel; Corvol, Jean-Christophe; Lesage, Suzanne; Lavi, Ofer; Deuschl, Günther; Kuhlenbaeumer, Gregor; Pawlack, Heike; Ulitsky, Igor; Kasten, Meike; Riess, Olaf; Brice, Alexis; Peterlin, Borut; Krainc, Dimitri
2017-10-17
To examine whether gene expression analysis of a large-scale Parkinson disease (PD) patient cohort produces a robust blood-based PD gene signature compared to previous studies that have used relatively small cohorts (≤220 samples). Whole-blood gene expression profiles were collected from a total of 523 individuals. After preprocessing, the data contained 486 gene profiles (n = 205 PD, n = 233 controls, n = 48 other neurodegenerative diseases) that were partitioned into training, validation, and independent test cohorts to identify and validate a gene signature. Batch-effect reduction and cross-validation were performed to ensure signature reliability. Finally, functional and pathway enrichment analyses were applied to the signature to identify PD-associated gene networks. A gene signature of 100 probes that mapped to 87 genes, corresponding to 64 upregulated and 23 downregulated genes differentiating between patients with idiopathic PD and controls, was identified with the training cohort and successfully replicated in both an independent validation cohort (area under the curve [AUC] = 0.79, p = 7.13E-6) and a subsequent independent test cohort (AUC = 0.74, p = 4.2E-4). Network analysis of the signature revealed gene enrichment in pathways, including metabolism, oxidation, and ubiquitination/proteasomal activity, and misregulation of mitochondria-localized genes, including downregulation of COX4I1 , ATP5A1 , and VDAC3 . We present a large-scale study of PD gene expression profiling. This work identifies a reliable blood-based PD signature and highlights the importance of large-scale patient cohorts in developing potential PD biomarkers. © 2017 American Academy of Neurology.
Zebrafish pancreas development.
Tiso, Natascia; Moro, Enrico; Argenton, Francesco
2009-11-27
An accurate understanding of the molecular events governing pancreas development can have an impact on clinical medicine related to diabetes, obesity and pancreatic cancer, diseases with a high impact in public health. Until 1996, the main animal models in which pancreas formation and differentiation could be studied were mouse and, for some instances related to early development, chicken and Xenopus. Zebrafish has penetrated this field very rapidly offering a new model of investigation; by joining functional genomics, genetics and in vivo whole mount visualization, Danio rerio has allowed large scale and fine multidimensional analysis of gene functions during pancreas formation and differentiation.
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Generated by landscape discontinuities (e.g., sea breezes) mesoscale circulation processes are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropiate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5 (the mean value of u'(sub i exp 2), where u'(sub i) represents the three Cartesian components of a mesoscale circulation (the angle bracket symbol is the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value). A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A-state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution ofmore » the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.« less
NASA Astrophysics Data System (ADS)
Dechant, B.; Ryu, Y.; Jiang, C.; Yang, K.
2017-12-01
Solar-induced chlorophyll fluorescence (SIF) is rapidly becoming an important tool to remotely estimate terrestrial gross primary productivity (GPP) at large spatial scales. Many findings, however, are based on empirical relationships between SIF and GPP that have been found to be dependent on plant functional types. Therefore, combining model-based analysis with observations is crucial to improve our understanding of SIF-GPP relationships. So far, most model-based results were based on SCOPE, a complex ecophysiological model with explicit description of canopy layers and a large number of parameters that may not be easily obtained reliably on large scales. Here, we report on our efforts to incorporate SIF into a two-big leaf (sun and shade) process-based model that is suitable for obtaining its inputs entirely from satellite products. We examine if the SIF-GPP relationships are consistent with the findings from SCOPE simulations and investigate if incorporation of the SIF signal into BESS can help improve GPP estimation. A case study in a rice paddy is presented.
A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex
Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing
2015-01-01
We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for “temporal receptive windows” that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision-making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or EEG/MEG) by taking into account inter-areal heterogeneity. PMID:26439530
Worobec, E A; Martin, N L; McCubbin, W D; Kay, C M; Brayer, G D; Hancock, R E
1988-04-07
A large-scale purification scheme was developed for lipopolysaccharide-free protein P, the phosphate-starvation-inducible outer-membrane porin from Pseudomonas aeruginosa. This highly purified protein P was used to successfully form hexagonal crystals in the presence of n-octyl-beta-glucopyranoside. Amino-acid analysis indicated that protein P had a similar composition to other bacterial outer membrane proteins, containing a high percentage (50%) of hydrophilic residues. The amino-terminal sequence of this protein, although not homologous to either outer membrane protein, PhoE or OmpF, of Escherichia coli, was found to have an analogous protein-folding pattern. Protein P in the native trimer form was capable of maintaining a stable functional trimer after proteinase cleavage. This suggested the existence of a strongly associated tertiary and quaternary structure. Circular dichroism studies confirmed these results in that a large proportion of the protein structure was determined to be beta-sheet and resistant to acid pH and heating in 0.1% sodium dodecyl sulphate.
Characterization of a piezo bendable X-ray mirror.
Vannoni, Maurizio; Freijo Martín, Idoia; Siewert, Frank; Signorato, Riccardo; Yang, Fan; Sinn, Harald
2016-01-01
A full-scale piezo bendable mirror built as a prototype for an offset mirror at the European XFEL is characterized. The piezo ceramic elements are glued onto the mirror substrate, side-face on with respect to the reflecting surface. Using a nanometre optical component measuring machine and a large-aperture Fizeau interferometer, the mirror profile and influence functions were characterized, and further analysis was made to investigate the junction effect, hysteresis, twisting and reproducibility.
Transition to organized behavior on suspensions of concentrated bacteria
NASA Astrophysics Data System (ADS)
Ganguly, Sujoy; Cisneros, Luis; Kessler, John; Goldstein, Raymond
2008-11-01
Concentrated populations of the swimming bacterium Bacillus subtilis develop a collective phase, the Zooming BioNematic, that exhibits large-scale coherence analogous to the molecular alignment of nematic liquid crystals. Bacterial suspensions were prepared in order to experimentally measure the transition to organized behavior as a function of the cell number concentration. PIV analysis was used to obtain cell velocities and define an order parameter in order to characterize the dynamics of the system.
NASA Astrophysics Data System (ADS)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Le Maréchal (1989) instead of the usual Oren-Spedicato scalar will be first presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
NASA Astrophysics Data System (ADS)
Cozzoli, Francesco; Smolders, Sven; Eelkema, Menno; Ysebaert, Tom; Escaravage, Vincent; Temmerman, Stijn; Meire, Patrick; Herman, Peter M. J.; Bouma, Tjeerd J.
2017-01-01
The natural coastal hydrodynamics and morphology worldwide is altered by human interventions such as embankments, shipping and dredging, which may have consequences for ecosystem functionality. To ensure long-term ecological sustainability, requires capability to predict long-term large-scale ecological effects of altered hydromorphology. As empirical data sets at relevant scales are missing, there is need for integrating ecological modeling with physical modeling. This paper presents a case study showing the long-term, large-scale macrozoobenthic community response to two contrasting human alterations of the hydromorphological habitat: deepening of estuarine channels to enhance navigability (Westerschelde) vs. realization of a storm surge barrier to enhance coastal safety (Oosterschelde). A multidisciplinary integration of empirical data and modeling of estuarine morphology, hydrodynamics and benthic ecology was used to reconstruct the hydrological evolution and resulting long-term (50 years) large-scale ecological trends for both estuaries over the last. Our model indicated that hydrodynamic alterations following the deepening of the Westerschelde had negative implications for benthic life, while the realization of the Oosterschelde storm surge barriers had mixed and habitat-dependent responses, that also include unexpected improvement of environmental quality. Our analysis illustrates long-term trends in the natural community caused by opposing management strategies. The divergent human pressures on the Oosterschelde and Westerschelde are examples of what could happen in a near future for many global coastal ecosystems. The comparative analysis of the two basins is a valuable source of information to understand (and communicate) the future ecological consequences of human coastal development.
Rudolph, Marc D; Graham, Alice M; Feczko, Eric; Miranda-Dominguez, Oscar; Rasmussen, Jerod M; Nardos, Rahel; Entringer, Sonja; Wadhwa, Pathik D; Buss, Claudia; Fair, Damien A
2018-05-01
Several lines of evidence support the link between maternal inflammation during pregnancy and increased likelihood of neurodevelopmental and psychiatric disorders in offspring. This longitudinal study seeks to advance understanding regarding implications of systemic maternal inflammation during pregnancy, indexed by plasma interleukin-6 (IL-6) concentrations, for large-scale brain system development and emerging executive function skills in offspring. We assessed maternal IL-6 during pregnancy, functional magnetic resonance imaging acquired in neonates, and working memory (an important component of executive function) at 2 years of age. Functional connectivity within and between multiple neonatal brain networks can be modeled to estimate maternal IL-6 concentrations during pregnancy. Brain regions heavily weighted in these models overlap substantially with those supporting working memory in a large meta-analysis. Maternal IL-6 also directly accounts for a portion of the variance of working memory at 2 years of age. Findings highlight the association of maternal inflammation during pregnancy with the developing functional architecture of the brain and emerging executive function.
Cormier, Alexandre; Avia, Komlan; Sterck, Lieven; Derrien, Thomas; Wucher, Valentin; Andres, Gwendoline; Monsoor, Misharl; Godfroy, Olivier; Lipinska, Agnieszka; Perrineau, Marie-Mathilde; Van De Peer, Yves; Hitte, Christophe; Corre, Erwan; Coelho, Susana M; Cock, J Mark
2017-04-01
The genome of the filamentous brown alga Ectocarpus was the first to be completely sequenced from within the brown algal group and has served as a key reference genome both for this lineage and for the stramenopiles. We present a complete structural and functional reannotation of the Ectocarpus genome. The large-scale assembly of the Ectocarpus genome was significantly improved and genome-wide gene re-annotation using extensive RNA-seq data improved the structure of 11 108 existing protein-coding genes and added 2030 new loci. A genome-wide analysis of splicing isoforms identified an average of 1.6 transcripts per locus. A large number of previously undescribed noncoding genes were identified and annotated, including 717 loci that produce long noncoding RNAs. Conservation of lncRNAs between Ectocarpus and another brown alga, the kelp Saccharina japonica, suggests that at least a proportion of these loci serve a function. Finally, a large collection of single nucleotide polymorphism-based markers was developed for genetic analyses. These resources are available through an updated and improved genome database. This study significantly improves the utility of the Ectocarpus genome as a high-quality reference for the study of many important aspects of brown algal biology and as a reference for genomic analyses across the stramenopiles. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
A density spike on astrophysical scales from an N-field waterfall transition
NASA Astrophysics Data System (ADS)
Halpern, Illan F.; Hertzberg, Mark P.; Joss, Matthew A.; Sfakianakis, Evangelos I.
2015-09-01
Hybrid inflation models are especially interesting as they lead to a spike in the density power spectrum on small scales, compared to the CMB, while also satisfying current bounds on tensor modes. Here we study hybrid inflation with N waterfall fields sharing a global SO (N) symmetry. The inclusion of many waterfall fields has the obvious advantage of avoiding topologically stable defects for N > 3. We find that it also has another advantage: it is easier to engineer models that can simultaneously (i) be compatible with constraints on the primordial spectral index, which tends to otherwise disfavor hybrid models, and (ii) produce a spike on astrophysically large length scales. The latter may have significant consequences, possibly seeding the formation of astrophysically large black holes. We calculate correlation functions of the time-delay, a measure of density perturbations, produced by the waterfall fields, as a convergent power series in both 1 / N and the field's correlation function Δ (x). We show that for large N, the two-point function is < δt (x) δt (0) > ∝Δ2 (| x |) / N and the three-point function is < δt (x) δt (y) δt (0) > ∝ Δ (| x - y |) Δ (| x |) Δ (| y |) /N2. In accordance with the central limit theorem, the density perturbations on the scale of the spike are Gaussian for large N and non-Gaussian for small N.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100
2015-01-15
In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynoldsmore » number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.« less
Tools for Large-Scale Mobile Malware Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierma, Michael
Analyzing mobile applications for malicious behavior is an important area of re- search, and is made di cult, in part, by the increasingly large number of appli- cations available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respec- tive o cial marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The rst tool we present, Andlantis, is a scalable dynamic analysis system capa- ble of processing over 3000more » Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the compu- tational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of appli- cations, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the di culty of acquiring appli- cations for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.« less
D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonka, Vishwajit L; Nimgaonkar, Vishwajit L
2014-01-01
Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or are not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.
Compressible turbulent mixing: Effects of Schmidt number.
Ni, Qionglin
2015-05-01
We investigated by numerical simulations the effects of Schmidt number on passive scalar transport in forced compressible turbulence. The range of Schmidt number (Sc) was 1/25∼25. In the inertial-convective range the scalar spectrum seemed to obey the k(-5/3) power law. For Sc≫1, there appeared a k(-1) power law in the viscous-convective range, while for Sc≪1, a k(-17/3) power law was identified in the inertial-diffusive range. The scaling constant computed by the mixed third-order structure function of the velocity-scalar increment showed that it grew over Sc, and the effect of compressibility made it smaller than the 4/3 value from incompressible turbulence. At small amplitudes, the probability distribution function (PDF) of scalar fluctuations collapsed to the Gaussian distribution whereas, at large amplitudes, it decayed more quickly than Gaussian. At large scales, the PDF of scalar increment behaved similarly to that of scalar fluctuation. In contrast, at small scales it resembled the PDF of scalar gradient. Furthermore, the scalar dissipation occurring at large magnitudes was found to grow with Sc. Due to low molecular diffusivity, in the Sc≫1 flow the scalar field rolled up and got mixed sufficiently. However, in the Sc≪1 flow the scalar field lost the small-scale structures by high molecular diffusivity and retained only the large-scale, cloudlike structures. The spectral analysis found that the spectral densities of scalar advection and dissipation in both Sc≫1 and Sc≪1 flows probably followed the k(-5/3) scaling. This indicated that in compressible turbulence the processes of advection and dissipation except that of scalar-dilatation coupling might deferring to the Kolmogorov picture. It then showed that at high wave numbers, the magnitudes of spectral coherency in both Sc≫1 and Sc≪1 flows decayed faster than the theoretical prediction of k(-2/3) for incompressible flows. Finally, the comparison with incompressible results showed that the scalar in compressible turbulence with Sc=1 lacked a conspicuous bump structure in its spectrum, but was more intermittent in the dissipative range.
NASA Astrophysics Data System (ADS)
DSouza, Adora M.; Abidin, Anas Z.; Leistritz, Lutz; Wismüller, Axel
2017-02-01
We investigate the applicability of large-scale Granger Causality (lsGC) for extracting a measure of multivariate information flow between pairs of regional brain activities from resting-state functional MRI (fMRI) and test the effectiveness of these measures for predicting a disease state. Such pairwise multivariate measures of interaction provide high-dimensional representations of connectivity profiles for each subject and are used in a machine learning task to distinguish between healthy controls and individuals presenting with symptoms of HIV Associated Neurocognitive Disorder (HAND). Cognitive impairment in several domains can occur as a result of HIV infection of the central nervous system. The current paradigm for assessing such impairment is through neuropsychological testing. With fMRI data analysis, we aim at non-invasively capturing differences in brain connectivity patterns between healthy subjects and subjects presenting with symptoms of HAND. To classify the extracted interaction patterns among brain regions, we use a prototype-based learning algorithm called Generalized Matrix Learning Vector Quantization (GMLVQ). Our approach to characterize connectivity using lsGC followed by GMLVQ for subsequent classification yields good prediction results with an accuracy of 87% and an area under the ROC curve (AUC) of up to 0.90. We obtain a statistically significant improvement (p<0.01) over a conventional Granger causality approach (accuracy = 0.76, AUC = 0.74). High accuracy and AUC values using our multivariate method to connectivity analysis suggests that our approach is able to better capture changes in interaction patterns between different brain regions when compared to conventional Granger causality analysis known from the literature.
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis and to measure deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lymanα (Lyα) forests, and on observational data of the Lyα forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data is determined and it is shown that clusters exist on scales as large as at least 20 h-1 Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples is calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules of how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, Cj,j+12/cdot2. Scale-scale correlations on two samples of the QSO Ly-α forests absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined and this spectrum reveals not only the magnitude, but the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two, independent observational data sets, have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-06-01
Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e. g. by extending it to cover dynamic FBC models).
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . 15. NUMBER OF PAGES 159...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . vi THIS PAGE
NASA Astrophysics Data System (ADS)
Qi, Juanjuan; Chen, Ke; Zhang, Shuhao; Yang, Yun; Guo, Lin; Yang, Shihe
2017-03-01
The controllable self-assembly of nanosized building blocks into larger specific structures can provide an efficient method of synthesizing novel materials with excellent properties. The self-assembly of nanocrystals by assisted means is becoming an extremely active area of research, because it provides a method of producing large-scale advanced functional materials with potential applications in the areas of energy, electronics, optics, and biologics. In this study, we applied an efficient strategy, namely, the use of ‘pressure control’ to the assembly of silver sulfide (Ag2S) nanospheres with a diameter of approximately 33 nm into large-scale, uniform Ag2S sub-microspheres with a size of about 0.33 μm. More importantly, this strategy realizes the online control of the overall reaction system, including the pressure, reaction time, and temperature, and could also be used to easily fabricate other functional materials on an industrial scale. Moreover, the thermodynamics and kinetics parameters for the thermal decomposition of silver diethyldithiocarbamate (Ag(DDTC)) are also investigated to explore the formation mechanism of the Ag2S nanosized building blocks which can be assembled into uniform sub-micron scale architecture. As a method of producing sub-micron Ag2S particles by means of the pressure-controlled self-assembly of nanoparticles, we foresee this strategy being an efficient and universally applicable option for constructing other new building blocks and assembling novel and large functional micromaterials on an industrial scale.
NASA Astrophysics Data System (ADS)
Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.
2018-04-01
Recent analysis of Solar-Terrestrial Relations Observatory (STEREO) imaging observations have described the early stages of the development of turbulence in the young solar wind in solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and the first plasma beta equals unity surfaces. The compatibility of the the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R ⊙. We affirm that the isotropization occurs in the vicinity of the first beta unity surface. The interpretation based on early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on the Earth observation data. In particular, in this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto- Adige, Northern Italy) exploiting the potentiality of this platform. These are large areas characterized by different terrain morphology, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs accuracy has been evaluated through computation of standard statistic parameters, both at global scale (considering the whole State/Region) and in function of the terrain morphology using several slope classes. The geometric accuracy in terms of Standard deviation and NMAD, for SRTM range from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER, the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for the SRTM in the flat areas whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the GEE potentialities to perform DSM assessment on a global scale.
Pan, Joshua; Meyers, Robin M; Michel, Brittany C; Mashtalir, Nazar; Sizemore, Ann E; Wells, Jonathan N; Cassel, Seth H; Vazquez, Francisca; Weir, Barbara A; Hahn, William C; Marsh, Joseph A; Tsherniak, Aviad; Kadoch, Cigall
2018-05-23
Protein complexes are assemblies of subunits that have co-evolved to execute one or many coordinated functions in the cellular environment. Functional annotation of mammalian protein complexes is critical to understanding biological processes, as well as disease mechanisms. Here, we used genetic co-essentiality derived from genome-scale RNAi- and CRISPR-Cas9-based fitness screens performed across hundreds of human cancer cell lines to assign measures of functional similarity. From these measures, we systematically built and characterized functional similarity networks that recapitulate known structural and functional features of well-studied protein complexes and resolve novel functional modules within complexes lacking structural resolution, such as the mammalian SWI/SNF complex. Finally, by integrating functional networks with large protein-protein interaction networks, we discovered novel protein complexes involving recently evolved genes of unknown function. Taken together, these findings demonstrate the utility of genetic perturbation screens alone, and in combination with large-scale biophysical data, to enhance our understanding of mammalian protein complexes in normal and disease states. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Universal scaling function in discrete time asymmetric exclusion processes
NASA Astrophysics Data System (ADS)
Chia, Nicholas; Bundschuh, Ralf
2005-03-01
In the universality class of the one dimensional Kardar-Parisi-Zhang surface growth, Derrida and Lebowitz conjectured the universality of not only the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz' original publication this universality has been verified for a variety of continuous time systems in the KPZ universality class. We study the Derrida-Lebowitz scaling function for multi-particle versions of the discrete time Asymmetric Exclusion Process. We find that in this discrete time system the Derrida-Lebowitz scaling function not only properly characterizes the large system size limit, but even accurately describes surprisingly small systems. These results have immediate applications in searching biological sequence databases.
Wolstencroft, J; Robinson, L; Srinivasan, R; Kerry, E; Mandy, W; Skuse, D
2018-07-01
Group social skills interventions (GSSIs) are a commonly offered treatment for children with high functioning ASD. We critically evaluated GSSI randomised controlled trials for those aged 6-25 years. Our meta-analysis of outcomes emphasised internal validity, thus was restricted to trials that used the parent-report social responsiveness scale (SRS) or the social skills rating system (SSRS). Large positive effect sizes were found for the SRS total score, plus the social communication and restricted interests and repetitive behaviours subscales. The SSRS social skills subscale improved with moderate effect size. Moderator analysis of the SRS showed that GSSIs that include parent-groups, and are of greater duration or intensity, obtained larger effect sizes. We recommend future trials distinguish gains in children's social knowledge from social performance.
ERIC Educational Resources Information Center
Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen
2018-01-01
The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…
ERIC Educational Resources Information Center
Hampden-Thompson, Gillian; Lubben, Fred; Bennett, Judith
2011-01-01
Quantitative secondary analysis of large-scale data can be combined with in-depth qualitative methods. In this paper, we discuss the role of this combined methods approach in examining the uptake of physics and chemistry in post compulsory schooling for students in England. The secondary data analysis of the National Pupil Database (NPD) served…
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force
NASA Astrophysics Data System (ADS)
Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.
2016-01-01
The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
Schultz, Simon R; Copeland, Caroline S; Foust, Amanda J; Quicke, Peter; Schuck, Renaud
2017-01-01
Recent years have seen substantial developments in technology for imaging neural circuits, raising the prospect of large scale imaging studies of neural populations involved in information processing, with the potential to lead to step changes in our understanding of brain function and dysfunction. In this article we will review some key recent advances: improved fluorophores for single cell resolution functional neuroimaging using a two photon microscope; improved approaches to the problem of scanning active circuits; and the prospect of scanless microscopes which overcome some of the bandwidth limitations of current imaging techniques. These advances in technology for experimental neuroscience have in themselves led to technical challenges, such as the need for the development of novel signal processing and data analysis tools in order to make the most of the new experimental tools. We review recent work in some active topics, such as region of interest segmentation algorithms capable of demixing overlapping signals, and new highly accurate algorithms for calcium transient detection. These advances motivate the development of new data analysis tools capable of dealing with spatial or spatiotemporal patterns of neural activity, that scale well with pattern size.
Schultz, Simon R.; Copeland, Caroline S.; Foust, Amanda J.; Quicke, Peter; Schuck, Renaud
2017-01-01
Recent years have seen substantial developments in technology for imaging neural circuits, raising the prospect of large scale imaging studies of neural populations involved in information processing, with the potential to lead to step changes in our understanding of brain function and dysfunction. In this article we will review some key recent advances: improved fluorophores for single cell resolution functional neuroimaging using a two photon microscope; improved approaches to the problem of scanning active circuits; and the prospect of scanless microscopes which overcome some of the bandwidth limitations of current imaging techniques. These advances in technology for experimental neuroscience have in themselves led to technical challenges, such as the need for the development of novel signal processing and data analysis tools in order to make the most of the new experimental tools. We review recent work in some active topics, such as region of interest segmentation algorithms capable of demixing overlapping signals, and new highly accurate algorithms for calcium transient detection. These advances motivate the development of new data analysis tools capable of dealing with spatial or spatiotemporal patterns of neural activity, that scale well with pattern size. PMID:28757657
Functional sequencing read annotation for high precision microbiome analysis
Zhu, Chengsheng; Miller, Maximilian; Marpaka, Srinayani; Vaysberg, Pavel; Rühlemann, Malte C; Wu, Guojun; Heinsen, Femke-Anouska; Tempel, Marie; Zhao, Liping; Lieb, Wolfgang; Franke, Andre; Bromberg, Yana
2018-01-01
Abstract The vast majority of microorganisms on Earth reside in often-inseparable environment-specific communities—microbiomes. Meta-genomic/-transcriptomic sequencing could reveal the otherwise inaccessible functionality of microbiomes. However, existing analytical approaches focus on attributing sequencing reads to known genes/genomes, often failing to make maximal use of available data. We created faser (functional annotation of sequencing reads), an algorithm that is optimized to map reads to molecular functions encoded by the read-correspondent genes. The mi-faser microbiome analysis pipeline, combining faser with our manually curated reference database of protein functions, accurately annotates microbiome molecular functionality. mi-faser’s minutes-per-microbiome processing speed is significantly faster than that of other methods, allowing for large scale comparisons. Microbiome function vectors can be compared between different conditions to highlight environment-specific and/or time-dependent changes in functionality. Here, we identified previously unseen oil degradation-specific functions in BP oil-spill data, as well as functional signatures of individual-specific gut microbiome responses to a dietary intervention in children with Prader–Willi syndrome. Our method also revealed variability in Crohn's Disease patient microbiomes and clearly distinguished them from those of related healthy individuals. Our analysis highlighted the microbiome role in CD pathogenicity, demonstrating enrichment of patient microbiomes in functions that promote inflammation and that help bacteria survive it. PMID:29194524
Goldstone models of modified gravity
NASA Astrophysics Data System (ADS)
Brax, Philippe; Valageas, Patrick
2017-02-01
We investigate scalar-tensor theories where matter couples to the scalar field via a kinetically dependent conformal coupling. These models can be seen as the low-energy description of invariant field theories under a global Abelian symmetry. The scalar field is then identified with the Goldstone mode of the broken symmetry. It turns out that the properties of these models are very similar to the ones of ultralocal theories where the scalar-field value is directly determined by the local matter density. This leads to a complete screening of the fifth force in the Solar System and between compact objects, through the ultralocal screening mechanism. On the other hand, the fifth force can have large effects in extended structures with large-scale density gradients, such as galactic halos. Interestingly, it can either amplify or damp Newtonian gravity, depending on the model parameters. We also study the background cosmology and the linear cosmological perturbations. The background cosmology is hardly different from its Λ -CDM counterpart while cosmological perturbations crucially depend on whether the coupling function is convex or concave. For concave functions, growth is hindered by the repulsiveness of the fifth force while it is enhanced in the convex case. In both cases, the departures from the Λ -CDM cosmology increase on smaller scales and peak for galactic structures. For concave functions, the formation of structure is largely altered below some characteristic mass, as smaller structures are delayed and would form later through fragmentation, as in some warm dark matter scenarios. For convex models, small structures form more easily than in the Λ -CDM scenario. This could lead to an over-abundance of small clumps. We use a thermodynamic analysis and show that although convex models have a phase transition between homogeneous and inhomogeneous phases, on cosmological scales the system does not enter the inhomogeneous phase. On the other hand, for galactic halos, the coexistence of small and large substructures in their outer regions could lead to observational signatures of these models.
The calculations of small molecular conformation energy differences by density functional method
NASA Astrophysics Data System (ADS)
Topol, I. A.; Burt, S. K.
1993-03-01
The differences in the conformational energies for the gauche (G) and trans(T) conformers of 1,2-difluoroethane and for myo-and scyllo-conformer of inositol have been calculated by local density functional method (LDF approximation) with geometry optimization using different sets of calculation parameters. It is shown that in the contrast to Hartree—Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and energy difference for both conformers of inositol. The results of normal vibrational analysis for1,2-difluoroethane showed that harmonic frequencies calculated in LDF approximation agree with experimental data with the accuracy typical for scaled large basis set Hartree—Fock calculations.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
Shedding new light on opsin evolution
Porter, Megan L.; Blasic, Joseph R.; Bok, Michael J.; Cameron, Evan G.; Pringle, Thomas; Cronin, Thomas W.; Robinson, Phyllis R.
2012-01-01
Opsin proteins are essential molecules in mediating the ability of animals to detect and use light for diverse biological functions. Therefore, understanding the evolutionary history of opsins is key to understanding the evolution of light detection and photoreception in animals. As genomic data have appeared and rapidly expanded in quantity, it has become possible to analyse opsins that functionally and histologically are less well characterized, and thus to examine opsin evolution strictly from a genetic perspective. We have incorporated these new data into a large-scale, genome-based analysis of opsin evolution. We use an extensive phylogeny of currently known opsin sequence diversity as a foundation for examining the evolutionary distributions of key functional features within the opsin clade. This new analysis illustrates the lability of opsin protein-expression patterns, site-specific functionality (i.e. counterion position) and G-protein binding interactions. Further, it demonstrates the limitations of current model organisms, and highlights the need for further characterization of many of the opsin sequence groups with unknown function. PMID:22012981
Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations
NASA Technical Reports Server (NTRS)
Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numeric simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E-1 /2k-1 scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time as τ (k ) ˜H-1 /2k-1 /2 . This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium including a τ (k ) ˜E-1 /2k-1 scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach
Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.
2016-01-01
Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata data necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis EEG studies. However, ESS can also encapsulate meta-information for the other modalities such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048
Garg, Abhishek D.; De Ruysscher, Dirk; Agostinis, Patrizia
2016-01-01
ABSTRACT The emerging role of the cancer cell-immune cell interface in shaping tumorigenesis/anticancer immunotherapy has increased the need to identify prognostic biomarkers. Henceforth, our primary aim was to identify the immunogenic cell death (ICD)-derived metagene signatures in breast, lung and ovarian cancer that associate with improved patient survival. To this end, we analyzed the prognostic impact of differential gene-expression of 33 pre-clinically-validated ICD-parameters through a large-scale meta-analysis involving 3,983 patients (‘discovery’ dataset) across lung (1,432), breast (1,115) and ovarian (1,436) malignancies. The main results were also substantiated in ‘validation’ datasets consisting of 818 patients of same cancer-types (i.e. 285 breast/274 lung/259 ovarian). The ICD-associated parameters exhibited a highly-clustered and largely cancer type-specific prognostic impact. Interestingly, we delineated ICD-derived consensus-metagene signatures that exhibited a positive prognostic impact that was either cancer type-independent or specific. Importantly, most of these ICD-derived consensus-metagenes (acted as attractor-metagenes and thereby) ‘attracted’ highly co-expressing sets of genes or convergent-metagenes. These convergent-metagenes also exhibited positive prognostic impact in respective cancer types. Remarkably, we found that the cancer type-independent consensus-metagene acted as an ‘attractor’ for cancer-specific convergent-metagenes. This reaffirms that the immunological prognostic landscape of cancer tends to segregate between cancer-independent and cancer-type specific gene signatures. Moreover, this prognostic landscape was largely dominated by the classical T cell activity/infiltration/function-related biomarkers. Interestingly, each cancer type tended to associate with biomarkers representing a specific T cell activity or function rather than pan-T cell biomarkers. Thus, our analysis confirms that ICD can serve as a platform for discovery of novel prognostic metagenes. PMID:27057433
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least square, orthogonal partial least square, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in 3-L bioreactor, process performance and product quality data moved closer to that of large scale. © 2015 American Institute of Chemical Engineers.
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ??? M 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r-??) can fit. Instead, we use an attenuation function that combines a fixed power law (r-0.7) with a fitted exponential dependence on distance, which is estimated as expt(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and 0.58M and 0.31M for large earthquakes.
Function Invariant and Parameter Scale-Free Transformation Methods
ERIC Educational Resources Information Center
Bentler, P. M.; Wingard, Joseph A.
1977-01-01
A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way depending on an alignment between the tide, wave vector of small-scale modes and line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to large-scale tide. We then investigate the impact of large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of the statistical errors, and show that a degradation in the parameter is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modifies the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N2) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The N2 force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
NASA Astrophysics Data System (ADS)
Glover, David M.; Doney, Scott C.; Oestreich, William K.; Tullo, Alisdair W.
2018-01-01
Mesoscale (10-300 km, weeks to months) physical variability strongly modulates the structure and dynamics of planktonic marine ecosystems via both turbulent advection and environmental impacts upon biological rates. Using structure function analysis (geostatistics), we quantify the mesoscale biological signals within global 13 year SeaWiFS (1998-2010) and 8 year MODIS/Aqua (2003-2010) chlorophyll a ocean color data (Level-3, 9 km resolution). We present geographical distributions, seasonality, and interannual variability of key geostatistical parameters: unresolved variability or noise, resolved variability, and spatial range. Resolved variability is nearly identical for both instruments, indicating that geostatistical techniques isolate a robust measure of biophysical mesoscale variability largely independent of measurement platform. In contrast, unresolved variability in MODIS/Aqua is substantially lower than in SeaWiFS, especially in oligotrophic waters where previous analysis identified a problem for the SeaWiFS instrument likely due to sensor noise characteristics. Both records exhibit a statistically significant relationship between resolved mesoscale variability and the low-pass filtered chlorophyll field horizontal gradient magnitude, consistent with physical stirring acting on large-scale gradient as an important factor supporting observed mesoscale variability. Comparable horizontal length scales for variability are found from tracer-based scaling arguments and geostatistical decorrelation. Regional variations between these length scales may reflect scale dependence of biological mechanisms that also create variability directly at the mesoscale, for example, enhanced net phytoplankton growth in coastal and frontal upwelling and convective mixing regions. Global estimates of mesoscale biophysical variability provide an improved basis for evaluating higher resolution, coupled ecosystem-ocean general circulation models, and data assimilation.
GOEAST: a web-based software toolkit for Gene Ontology enrichment analysis.
Zheng, Qi; Wang, Xiu-Jie
2008-07-01
Gene Ontology (GO) analysis has become a commonly used approach for functional studies of large-scale genomic or transcriptomic data. Although there have been a lot of software with GO-related analysis functions, new tools are still needed to meet the requirements for data generated by newly developed technologies or for advanced analysis purpose. Here, we present a Gene Ontology Enrichment Analysis Software Toolkit (GOEAST), an easy-to-use web-based toolkit that identifies statistically overrepresented GO terms within given gene sets. Compared with available GO analysis tools, GOEAST has the following improved features: (i) GOEAST displays enriched GO terms in graphical format according to their relationships in the hierarchical tree of each GO category (biological process, molecular function and cellular component), therefore, provides better understanding of the correlations among enriched GO terms; (ii) GOEAST supports analysis for data from various sources (probe or probe set IDs of Affymetrix, Illumina, Agilent or customized microarrays, as well as different gene identifiers) and multiple species (about 60 prokaryote and eukaryote species); (iii) One unique feature of GOEAST is to allow cross comparison of the GO enrichment status of multiple experiments to identify functional correlations among them. GOEAST also provides rigorous statistical tests to enhance the reliability of analysis results. GOEAST is freely accessible at http://omicslab.genetics.ac.cn/GOEAST/
Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W
2018-02-13
The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using the 2 instruments. The Rasch analysis was based on the assessment of 408 adult patients at admission to sub-acute rehabilitation in Copenhagen, Denmark after traumatic brain injury. The Early Functional Abilities scale provides valid and objective measurement of vegetative (autonomic), facio-oral, sensorimotor and communicative/cognitive functions. Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which the Functional Independence Measurement™ has a floor effect. The Early Functional Abilities scale assesses the early recovery of important aspects of brain function after traumatic brain injury, but is not unidimensional. We recommend removal of the "standing" item and calculation of summary subscales for the separate dimensions.
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
Study of complex proteome brings forward higher request for the quantification method using mass spectrometry technology. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which integrated quantification with functional analysis effectively. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral count, protein sequence length, shared peptides, and ion intensity. It adopts spectral count for quantitative analysis and builds a new method for shared peptides to accurately evaluate abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports the large-scale functional annotations for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve accuracy of quantification with better dynamic range.
Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395
Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package "DensParcorr" can be downloaded from CRAN for implementing the proposed statistical methods.
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyzephenomics datasets. PMID:27141917
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has been typically ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results with several orders less computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
Male group size, female distribution and changes in sexual segregation by Roosevelt elk
Peterson, Leah M.
2017-01-01
Sexual segregation, or the differential use of space by males and females, is hypothesized to be a function of body size dimorphism. Sexual segregation can also manifest at small (social segregation) and large (habitat segregation) spatial scales for a variety of reasons. Furthermore, the connection between small- and large-scale sexual segregation has rarely been addressed. We studied a population of Roosevelt elk (Cervus elaphus roosevelti) across 21 years in north coastal California, USA, to assess small- and large-scale sexual segregation in winter. We hypothesized that male group size would associate with small-scale segregation and that a change in female distribution would associate with large-scale segregation. Variation in forage biomass might also be coupled to small and large-scale sexual segregation. Our findings were consistent with male group size associating with small-scale segregation and a change in female distribution associating with large-scale segregation. Females appeared to avoid large groups comprised of socially dominant males. Males appeared to occupy a habitat vacated by females because of a wider forage niche, greater tolerance to lethal risks, and, perhaps, to reduce encounters with other elk. Sexual segregation at both spatial scales was a poor predictor of forage biomass. Size dimorphism was coupled to change in sexual segregation at small and large spatial scales. Small scale segregation can seemingly manifest when all forage habitat is occupied by females and large scale segregation might happen when some forage habitat is not occupied by females. PMID:29121076
PIRATE: pediatric imaging response assessment and targeting environment
NASA Astrophysics Data System (ADS)
Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho
2010-02-01
By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
NASA Astrophysics Data System (ADS)
Yang, Y.; Gan, T. Y.; Tan, X.
2017-12-01
In the past few decades, there have been more extreme climate events around the world, and Canada has also suffered from numerous extreme precipitation events. In this paper, trend analysis, change point analysis, probability distribution function, principal component analysis and wavelet analysis were used to investigate the spatial and temporal patterns of extreme precipitation in Canada. Ten extreme precipitation indices were calculated using long-term daily precipitation data from 164 gauging stations. Several large-scale climate patterns such as El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), Pacific-North American (PNA), and North Atlantic Oscillation (NAO) were selected to analyze the relationships between extreme precipitation and climate indices. Convective Available Potential Energy (CAPE), specific humidity, and surface temperature were employed to investigate the potential causes of the trends.The results show statistically significant positive trends for most indices, which indicate increasing extreme precipitation. The majority of indices display more increasing trends along the southern border of Canada while decreasing trends dominate in the central Canadian Prairies (CP). In addition, strong connections are found between the extreme precipitation and climate indices and the effects of climate pattern differ for each region. The seasonal CAPE, specific humidity, and temperature are found to be closely related to Canadian extreme precipitation.
Rebling, Johannes; Estrada, Héctor; Gottschalk, Sven; Sela, Gali; Zwack, Michael; Wissmeyer, Georg; Ntziachristos, Vasilis; Razansky, Daniel
2018-04-19
A critical link exists between pathological changes of cerebral vasculature and diseases affecting brain function. Microscopic techniques have played an indispensable role in the study of neurovascular anatomy and functions. Yet, investigations are often hindered by suboptimal trade-offs between the spatiotemporal resolution, field-of-view (FOV) and type of contrast offered by the existing optical microscopy techniques. We present a hybrid dual-wavelength optoacoustic (OA) biomicroscope capable of rapid transcranial visualization of large-scale cerebral vascular networks. The system offers 3-dimensional views of the morphology and oxygenation status of the cerebral vasculature with single capillary resolution and a FOV exceeding 6 × 8 mm 2 , thus covering the entire cortical vasculature in mice. The large-scale OA imaging capacity is complemented by simultaneously acquired pulse-echo ultrasound (US) biomicroscopy scans of the mouse skull. The new approach holds great potential to provide better insights into cerebrovascular function and facilitate efficient studies into neurological and vascular abnormalities of the brain. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Holtorf, Hauke; Guitton, Marie-Christine; Reski, Ralf
2002-04-01
Functional genome analysis of plants has entered the high-throughput stage. The complete genome information from key species such as Arabidopsis thaliana and rice is now available and will further boost the application of a range of new technologies to functional plant gene analysis. To broadly assign functions to unknown genes, different fast and multiparallel approaches are currently used and developed. These new technologies are based on known methods but are adapted and improved to accommodate for comprehensive, large-scale gene analysis, i.e. such techniques are novel in the sense that their design allows researchers to analyse many genes at the same time and at an unprecedented pace. Such methods allow analysis of the different constituents of the cell that help to deduce gene function, namely the transcripts, proteins and metabolites. Similarly the phenotypic variations of entire mutant collections can now be analysed in a much faster and more efficient way than before. The different methodologies have developed to form their own fields within the functional genomics technological platform and are termed transcriptomics, proteomics, metabolomics and phenomics. Gene function, however, cannot solely be inferred by using only one such approach. Rather, it is only by bringing together all the information collected by different functional genomic tools that one will be able to unequivocally assign functions to unknown plant genes. This review focuses on current technical developments and their impact on the field of plant functional genomics. The lower plant Physcomitrella is introduced as a new model system for gene function analysis, owing to its high rate of homologous recombination.
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamic simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique, called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method, employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
Raguz Nakic, Zrinka; Seisenbacher, Gerhard; Posas, Francesc; Sauer, Uwe
2016-11-15
Coordinated through a complex network of kinases and phosphatases, protein phosphorylation regulates essentially all cellular processes in eukaryotes. Recent advances in proteomics enable detection of thousands of phosphorylation sites (phosphosites) in single experiments. However, functionality of the vast majority of these sites remains unclear and we lack suitable approaches to evaluate functional relevance at a pace that matches their detection. Here, we assess functionality of 26 phosphosites by introducing phosphodeletion and phosphomimic mutations in 25 metabolic enzymes and regulators from the TOR and HOG signaling pathway in Saccharomyces cerevisiae by phenotypic analysis and untargeted metabolomics. We show that metabolomics largely outperforms growth analysis and recovers 10 out of the 13 previously characterized phosphosites and suggests functionality for several novel sites, including S79 on the TOR regulatory protein Tip41. We analyze metabolic profiles to identify consequences underlying regulatory phosphorylation events and detecting glycerol metabolism to have a so far unknown influence on arginine metabolism via phosphoregulation of the glycerol dehydrogenases. Further, we also find S508 in the MAPKK Pbs2 as a potential link for cross-talking between HOG signaling and the cell wall integrity pathway. We demonstrate that metabolic profiles can be exploited for gaining insight into regulatory consequences and biological roles of phosphosites. Altogether, untargeted metabolomics is a fast, sensitive and informative approach appropriate for future large-scale functional analyses of phosphosites.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the oppurtunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
Belt conveyor is one of the most important devices to transport bulk-solid material for long distance. Dynamic analysis is the key to decide whether the design is rational in technique, safe and reliable in running, feasible in economy. It is very important to study dynamic properties, improve efficiency and productivity, guarantee conveyor safe, reliable and stable running. The dynamic researches and applications of large scale belt conveyor are discussed. The main research topics, the state-of-the-art of dynamic researches on belt conveyor are analyzed. The main future works focus on dynamic analysis, modeling and simulation of main components and whole system, nonlinear modeling, simulation and vibration analysis of large scale conveyor system.
Simonyan, Kristina; Fuertinger, Stefan
2015-04-01
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. Copyright © 2015 the American Physiological Society.
HAFNI-enabled largescale platform for neuroimaging informatics (HELPNI).
Makkie, Milad; Zhao, Shijie; Jiang, Xi; Lv, Jinglei; Zhao, Yu; Ge, Bao; Li, Xiang; Han, Junwei; Liu, Tianming
Tremendous efforts have thus been devoted on the establishment of functional MRI informatics systems that recruit a comprehensive collection of statistical/computational approaches for fMRI data analysis. However, the state-of-the-art fMRI informatics systems are especially designed for specific fMRI sessions or studies of which the data size is not really big, and thus has difficulty in handling fMRI 'big data.' Given the size of fMRI data are growing explosively recently due to the advancement of neuroimaging technologies, an effective and efficient fMRI informatics system which can process and analyze fMRI big data is much needed. To address this challenge, in this work, we introduce our newly developed informatics platform, namely, 'HAFNI-enabled largescale platform for neuroimaging informatics (HELPNI).' HELPNI implements our recently developed computational framework of sparse representation of whole-brain fMRI signals which is called holistic atlases of functional networks and interactions (HAFNI) for fMRI data analysis. HELPNI provides integrated solutions to archive and process large-scale fMRI data automatically and structurally, to extract and visualize meaningful results information from raw fMRI data, and to share open-access processed and raw data with other collaborators through web. We tested the proposed HELPNI platform using publicly available 1000 Functional Connectomes dataset including over 1200 subjects. We identified consistent and meaningful functional brain networks across individuals and populations based on resting state fMRI (rsfMRI) big data. Using efficient sampling module, the experimental results demonstrate that our HELPNI system has superior performance than other systems for large-scale fMRI data in terms of processing and storing the data and associated results much faster.
HAFNI-enabled largescale platform for neuroimaging informatics (HELPNI).
Makkie, Milad; Zhao, Shijie; Jiang, Xi; Lv, Jinglei; Zhao, Yu; Ge, Bao; Li, Xiang; Han, Junwei; Liu, Tianming
2015-12-01
Tremendous efforts have thus been devoted on the establishment of functional MRI informatics systems that recruit a comprehensive collection of statistical/computational approaches for fMRI data analysis. However, the state-of-the-art fMRI informatics systems are especially designed for specific fMRI sessions or studies of which the data size is not really big, and thus has difficulty in handling fMRI 'big data.' Given the size of fMRI data are growing explosively recently due to the advancement of neuroimaging technologies, an effective and efficient fMRI informatics system which can process and analyze fMRI big data is much needed. To address this challenge, in this work, we introduce our newly developed informatics platform, namely, 'HAFNI-enabled largescale platform for neuroimaging informatics (HELPNI).' HELPNI implements our recently developed computational framework of sparse representation of whole-brain fMRI signals which is called holistic atlases of functional networks and interactions (HAFNI) for fMRI data analysis. HELPNI provides integrated solutions to archive and process large-scale fMRI data automatically and structurally, to extract and visualize meaningful results information from raw fMRI data, and to share open-access processed and raw data with other collaborators through web. We tested the proposed HELPNI platform using publicly available 1000 Functional Connectomes dataset including over 1200 subjects. We identified consistent and meaningful functional brain networks across individuals and populations based on resting state fMRI (rsfMRI) big data. Using efficient sampling module, the experimental results demonstrate that our HELPNI system has superior performance than other systems for large-scale fMRI data in terms of processing and storing the data and associated results much faster.
Warren, Jeffrey M; Hanson, Paul J; Iversen, Colleen M; Kumar, Jitendra; Walker, Anthony P; Wullschleger, Stan D
2015-01-01
There is wide breadth of root function within ecosystems that should be considered when modeling the terrestrial biosphere. Root structure and function are closely associated with control of plant water and nutrient uptake from the soil, plant carbon (C) assimilation, partitioning and release to the soils, and control of biogeochemical cycles through interactions within the rhizosphere. Root function is extremely dynamic and dependent on internal plant signals, root traits and morphology, and the physical, chemical and biotic soil environment. While plant roots have significant structural and functional plasticity to changing environmental conditions, their dynamics are noticeably absent from the land component of process-based Earth system models used to simulate global biogeochemical cycling. Their dynamic representation in large-scale models should improve model veracity. Here, we describe current root inclusion in models across scales, ranging from mechanistic processes of single roots to parameterized root processes operating at the landscape scale. With this foundation we discuss how existing and future root functional knowledge, new data compilation efforts, and novel modeling platforms can be leveraged to enhance root functionality in large-scale terrestrial biosphere models by improving parameterization within models, and introducing new components such as dynamic root distribution and root functional traits linked to resource extraction. No claim to original US Government works. New Phytologist © 2014 New Phytologist Trust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx
2014-12-01
We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit to average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (lnR,θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as amore » function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.« less
Goh, S. Y. Matthew; Irimia, Andrei; Torgerson, Carinna M.; Horn, John D. Van
2014-01-01
Throughout the past few decades, the ability to treat and rehabilitate traumatic brain injury (TBI) patients has become critically reliant upon the use of neuroimaging to acquire adequate knowledge of injury-related effects upon brain function and recovery. As a result, the need for TBI neuroimaging analysis methods has increased in recent years due to the recognition that spatiotemporal computational analyses of TBI evolution are useful for capturing the effects of TBI dynamics. At the same time, however, the advent of such methods has brought about the need to analyze, manage, and integrate TBI neuroimaging data using informatically inspired approaches which can take full advantage of their large dimensionality and informational complexity. Given this perspective, we here discuss the neuroinformatics challenges for TBI neuroimaging analysis in the context of structural, connectivity, and functional paradigms. Within each of these, the availability of a wide range of neuroimaging modalities can be leveraged to fully understand the heterogeneity of TBI pathology; consequently, large-scale computer hardware resources and next-generation processing software are often required for efficient data storage, management, and analysis of TBI neuroimaging data. However, each of these paradigms poses challenges in the context of informatics such that the ability to address them is critical for augmenting current capabilities to perform neuroimaging analysis of TBI and to improve therapeutic efficacy. PMID:24616696
Analysis of the two-point velocity correlations in turbulent boundary layer flows
NASA Technical Reports Server (NTRS)
Oberlack, M.
1995-01-01
The general objective of the present work is to explore the use of Rapid Distortion Theory (RDT) in analysis of the two-point statistics of the log-layer. RDT is applicable only to unsteady flows where the non-linear turbulence-turbulence interaction can be neglected in comparison to linear turbulence-mean interactions. Here we propose to use RDT to examine the structure of the large energy-containing scales and their interaction with the mean flow in the log-region. The contents of the work are twofold: First, two-point analysis methods will be used to derive the law-of-the-wall for the special case of zero mean pressure gradient. The basic assumptions needed are one-dimensionality in the mean flow and homogeneity of the fluctuations. It will be shown that a formal solution of the two-point correlation equation can be obtained as a power series in the von Karman constant, known to be on the order of 0.4. In the second part, a detailed analysis of the two-point correlation function in the log-layer will be given. The fundamental set of equations and a functional relation for the two-point correlation function will be derived. An asymptotic expansion procedure will be used in the log-layer to match Kolmogorov's universal range and the one-point correlations to the inviscid outer region valid for large correlation distances.
NASA Technical Reports Server (NTRS)
Miles, J. H.
1974-01-01
A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on the N-independent-source model of P. Thomas extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown flap data taken from turbofan engine tests and from large scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association study for complex diseases will generate massive amount of single nucleotide polymorphisms (SNPs) data. Univariate statistical test (i.e. Fisher exact test) was used to single out non-associated SNPs. However, the disease-susceptible SNPs may have little marginal effects in population and are unlikely to retain after the univariate tests. Also, model-based methods are impractical for large-scale dataset. Moreover, genetic heterogeneity makes the traditional methods harder to identify the genetic causes of diseases. A more recent random forest method provides a more robust method for screening the SNPs in thousands scale. However, for more large-scale data, i.e., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screening SNPs in whole-genome large scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further more complex computational modeling task.
Spreng, R Nathan; Stevens, W Dale; Viviano, Joseph D; Schacter, Daniel L
2016-09-01
Anticorrelation between the default and dorsal attention networks is a central feature of human functional brain organization. Hallmarks of aging include impaired default network modulation and declining medial temporal lobe (MTL) function. However, it remains unclear if this anticorrelation is preserved into older adulthood during task performance, or how this is related to the intrinsic architecture of the brain. We hypothesized that older adults would show reduced within- and increased between-network functional connectivity (FC) across the default and dorsal attention networks. To test this hypothesis, we examined the effects of aging on task-related and intrinsic FC using functional magnetic resonance imaging during an autobiographical planning task known to engage the default network and during rest, respectively, with young (n = 72) and older (n = 79) participants. The task-related FC analysis revealed reduced anticorrelation with aging. At rest, there was a robust double dissociation, with older adults showing a pattern of reduced within-network FC, but increased between-network FC, across both networks, relative to young adults. Moreover, older adults showed reduced intrinsic resting-state FC of the MTL with both networks suggesting a fractionation of the MTL memory system in healthy aging. These findings demonstrate age-related dedifferentiation among these competitive large-scale networks during both task and rest, consistent with the idea that age-related changes are associated with a breakdown in the intrinsic functional architecture within and among large-scale brain networks. Copyright © 2016 Elsevier Inc. All rights reserved.
Performance Characterization of Global Address Space Applications: A Case Study with NWChem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer
The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. Themore » research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generation Infiniband interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.« less
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make a scientific inference, statistic analysis on large simulation datasets, e.g., finding halos, obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are forbiddingly large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 109 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques on GPU boosting, sampling, and parallel I/O, to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 103 largest halos (Liu et al., 2015) to ∼ 104 - 105, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼ 1012 particles while using less than an hour of running time on a single GPU Nvidia GTX 1080.
Large Eddy Simulation in the Computation of Jet Noise
NASA Technical Reports Server (NTRS)
Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.
1999-01-01
Noise can be predicted by solving Full (time-dependent) Compressible Navier-Stokes Equation (FCNSE) with computational domain. The fluctuating near field of the jet produces propagating pressure waves that produce far-field sound. The fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Noise can be predicted by solving the full, time-dependent, compressible Navier-Stokes equations with the computational domain extended to far field - but this is not feasible as indicated above. At high Reynolds number of technological interest turbulence has large range of scales. Direct numerical simulations (DNS) can not capture the small scales of turbulence. The large scales are more efficient than the small scales in radiating sound. The emphasize is thus on calculating sound radiated by large scales.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Giusarma, Elena
2017-10-01
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two galaxy different populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Global Proteomics Analysis of Protein Lysine Methylation.
Cao, Xing-Jun; Garcia, Benjamin A
2016-11-01
Lysine methylation is a common protein post-translational modification dynamically mediated by protein lysine methyltransferases (PKMTs) and protein lysine demethylases (PKDMs). Beyond histone proteins, lysine methylation on non-histone proteins plays a substantial role in a variety of functions in cells and is closely associated with diseases such as cancer. A large body of evidence indicates that the dysregulation of some PKMTs leads to tumorigenesis via their non-histone substrates. However, most studies on other PKMTs have made slow progress owing to the lack of approaches for extensive screening of lysine methylation sites. However, recently, there has been a series of publications to perform large-scale analysis of protein lysine methylation. In this unit, we introduce a protocol for the global analysis of protein lysine methylation in cells by means of immunoaffinity enrichment and mass spectrometry. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.
Structural design of the Large Deployable Reflector (LDR)
NASA Technical Reports Server (NTRS)
Satter, Celeste M.; Lou, Michael C.
1991-01-01
An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.
Structural design of the Large Deployable Reflector (LDR)
NASA Astrophysics Data System (ADS)
Satter, Celeste M.; Lou, Michael C.
1991-09-01
An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.
Beam Conditioning and Harmonic Generation in Free ElectronLasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charman, A.E.; Penn, G.; Wolski, A.
2004-07-05
The next generation of large-scale free-electron lasers (FELs) such as Euro-XFEL and LCLS are to be devices which produce coherent X-rays using Self-Amplified Spontaneous Emission (SASE). The performance of these devices is limited by the spread in longitudinal velocities of the beam. In the case where this spread arises primarily from large transverse oscillation amplitudes, beam conditioning can significantly enhance FEL performance. Future X-ray sources may also exploit harmonic generation starting from laser-seeded modulation. Preliminary analysis of such devices is discussed, based on a novel trial-function/variational-principle approach, which shows good agreement with more lengthy numerical simulations.
Fusion and Fission of Cognitive Functions in the Human Parietal Cortex
Humphreys, Gina F.; Lambon Ralph, Matthew A.
2015-01-01
How is higher cognitive function organized in the human parietal cortex? A century of neuropsychology and 30 years of functional neuroimaging has implicated the parietal lobe in many different verbal and nonverbal cognitive domains. There is little clarity, however, on how these functions are organized, that is, where do these functions coalesce (implying a shared, underpinning neurocomputation) and where do they divide (indicating different underlying neural functions). Until now, there has been no multi-domain synthesis in order to reveal where there is fusion or fission of functions in the parietal cortex. This aim was achieved through a large-scale activation likelihood estimation (ALE) analysis of 386 studies (3952 activation peaks) covering 8 cognitive domains. A tripartite, domain-general neuroanatomical division and 5 principles of cognitive organization were established, and these are discussed with respect to a unified theory of parietal functional organization. PMID:25205661
Mate, Kedar S; Ngidi, Wilbroda Hlolisile; Reddy, Jennifer; Mphatswe, Wendy; Rollins, Nigel; Barker, Pierre
2013-11-01
New approaches are needed to evaluate quality improvement (QI) within large-scale public health efforts. This case report details challenges to large-scale QI evaluation, and proposes solutions relying on adaptive study design. We used two sequential evaluative methods to study a QI effort to improve delivery of HIV preventive care in public health facilities in three districts in KwaZulu-Natal, South Africa, over a 3-year period. We initially used a cluster randomised controlled trial (RCT) design. During the RCT study period, tensions arose between intervention implementation and evaluation design due to loss of integrity of the randomisation unit over time, pressure to implement changes across the randomisation unit boundaries, and use of administrative rather than functional structures for the randomisation. In response to this loss of design integrity, we switched to a more flexible intervention design and a mixed-methods quasiexperimental evaluation relying on both a qualitative analysis and an interrupted time series quantitative analysis. Cluster RCT designs may not be optimal for evaluating complex interventions to improve implementation in uncontrolled 'real world' settings. More flexible, context-sensitive evaluation designs offer a better balance of the need to adjust the intervention during the evaluation to meet implementation challenges while providing the data required to evaluate effectiveness. Our case study involved HIV care in a resource-limited setting, but these issues likely apply to complex improvement interventions in other settings.
Zou, Yun; Hu, Li; Tremp, Mathias; Jin, Yunbo; Chen, Hui; Ma, Gang; Lin, Xiaoxi
2018-02-23
The aim of this study was to repair large periorbital cutaneous defects by an innovative technique called PEPSI (periorbital elevation and positioning with secret incisions) technique with functional and aesthetic outcomes. In this retrospective study, unilateral periorbital cutaneous defects in 15 patients were repaired by the PEPSI technique. The ages of patients ranged from 3 to 46 years (average, 19 years). The outcome evaluations included scars (Vancouver Scar Scale and visual analog scale score), function and aesthetic appearance of eyelids, and patient satisfaction. The repair size was measured by the maximum advancement distance of skin flap during operation. All patients achieved an effective repair with a mean follow-up of 18.3 months. Except one with a small (approximately 0.3 cm) necrosis, all patients healed with no complication. The mean Vancouver Scar Scale and visual analog scale scores were 2.1 ± 1.7 and 8.5 ± 1.2, respectively. Ideal cosmetic and functional outcomes were achieved in 14 patients (93.3%). All patients achieved complete satisfaction except 1 patient with partial satisfaction. The mean maximum advancement distance of skin flap was 20.2 mm (range, 8-50 mm). This study demonstrated that the PEPSI technique is an effective method to repair large periorbital cutaneous defects with acceptable functional and aesthetic outcomes.
ERIC Educational Resources Information Center
Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa
2016-01-01
In this large-scale study, higher education teachers' descriptions of their own learning were examined with qualitative analysis involving application of principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…
Sale, Martin V.; Lord, Anton; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B.
2015-01-01
Normal brain function depends on a dynamic balance between local specialization and large-scale integration. It remains unclear, however, how local changes in functionally specialized areas can influence integrated activity across larger brain networks. By combining transcranial magnetic stimulation with resting-state functional magnetic resonance imaging, we tested for changes in large-scale integration following the application of excitatory or inhibitory stimulation on the human motor cortex. After local inhibitory stimulation, regions encompassing the sensorimotor module concurrently increased their internal integration and decreased their communication with other modules of the brain. There were no such changes in modular dynamics following excitatory stimulation of the same area of motor cortex nor were there changes in the configuration and interactions between core brain hubs after excitatory or inhibitory stimulation of the same area. These results suggest the existence of selective mechanisms that integrate local changes in neural activity, while preserving ongoing communication between brain hubs. PMID:25717162
Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase
Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele
2015-01-01
Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins. PMID:25672826
Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase
NASA Astrophysics Data System (ADS)
Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele
2015-02-01
Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Recently, we can obtain continuous dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. For construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate inter-face and the material property such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), (3) inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Unstructured FE non-linear seismic wave simulation code has been developed. This achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-step. A high fidelity FEM simulation code with mesh generator has also been developed to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry for 1km mesh. This code has been improved the code for crustal deformation and achieved 2.05 T-DOF with 45m resolution on the plate interface. This high-resolution analysis enables computation of change of stress acting on the plate interface. Further, for inverse analyses, waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been improved to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are developing the methods for forecasting the slip velocity variation on the plate interface. Although the prototype is for elastic half space model, we are applying it for 3D heterogeneous structure with the high-fidelity FE model. Furthermore, large-scale simulation codes for monitoring are being implemented on the GPU clusters and analysis tools are developing to include other functions such as examination in model errors.
Visualisation and graph-theoretic analysis of a large-scale protein structural interactome
Bolser, Dan; Dafas, Panos; Harrington, Richard; Park, Jong; Schroeder, Michael
2003-01-01
Background Large-scale protein interaction maps provide a new, global perspective with which to analyse protein function. PSIMAP, the Protein Structural Interactome Map, is a database of all the structurally observed interactions between superfamilies of protein domains with known three-dimensional structure in the PDB. PSIMAP incorporates both functional and evolutionary information into a single network. Results We present a global analysis of PSIMAP using several distinct network measures relating to centrality, interactivity, fault-tolerance, and taxonomic diversity. We found the following results: Centrality: we show that the center and barycenter of PSIMAP do not coincide, and that the superfamilies forming the barycenter relate to very general functions, while those constituting the center relate to enzymatic activity. Interactivity: we identify the P-loop and immunoglobulin superfamilies as the most highly interactive. We successfully use connectivity and cluster index, which characterise the connectivity of a superfamily's neighbourhood, to discover superfamilies of complex I and II. This is particularly significant as the structure of complex I is not yet solved. Taxonomic diversity: we found that highly interactive superfamilies are in general taxonomically very diverse and are thus amongst the oldest. Fault-tolerance: we found that the network is very robust as for the majority of superfamilies removal from the network will not break up the network. Conclusions Overall, we can single out the P-loop containing nucleotide triphosphate hydrolases superfamily as it is the most highly connected and has the highest taxonomic diversity. In addition, this superfamily has the highest interaction rank, is the barycenter of the network (it has the shortest average path to every other superfamily in the network), and is an articulation vertex, whose removal will disconnect the network. More generally, we conclude that the graph-theoretic and taxonomic analysis of PSIMAP is an important step towards the understanding of protein function and could be an important tool for tracing the evolution of life at the molecular level. PMID:14531933
NASA Astrophysics Data System (ADS)
Thorslund, J.; Jarsjo, J.; Destouni, G.
2017-12-01
The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater of good water quality, it is critical to understand how pollutants are released into, transported and transformed within the hydrological system. Some key scientific questions include: What are net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions within and between components of the landscape, such as wetlands, on mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions trough wetlands as a potential ecosystem service of pollutant mitigation. Results shows that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at a larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions to pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales. We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively scale at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.
Deckel, A W; Hesselbrock, V; Bauer, L
1995-04-01
This experiment examined the relationship between anterior brain functioning and alcohol-related expectancies. Ninety-one young men at risk for developing alcoholism were assessed on the Alcohol Expectancy Questionnaire (AEQ) and administered neuropsychological and EEG tests. Three of the scales on the AEQ, including the "Enhanced Sexual Functioning" scale, the "Increased Social Assertiveness" scale, and items from the "Global/Positive Change scale," were used, because each of these scales has been found to discriminate alcohol-based expectancies adequately by at least two separate sets of investigators. Regression analysis found that anterior neuropsychological tests (including the Wisconsin Card Sorting test, the Porteus Maze test, the Controlled Oral Word Fluency test, and the Luria-Nebraska motor functioning tests) were predictive of the AEQ scale scores on regression analysis. One of the AEQ scales, "Enhanced Sexual Functioning," was also predicted by WAIS-R-Verbal scales, whereas the "Global/Positive" AEQ scale was predicted by the WAIS-R Performance scales. Regression analysis using EEG power as predictors found that left versus right hemisphere "difference" scores obtained from frontal EEG leads were predictive of the three AEQ scales. Conversely, parietal EEG power did not significantly predict any of the expectancy scales. It is concluded that anterior brain any of the expectancy scales. It is concluded that anterior brain functioning is associated with alcohol-related expectancies. These findings suggest that alcohol-related expectancy may be, in part, biologically determined by frontal/prefrontal systems, and that dysfunctioning in these systems may serve as a risk factor for the development of alcohol-related behaviors.
Assessment of protein set coherence using functional annotations
Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto
2008-01-01
Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846
NASA Astrophysics Data System (ADS)
Wainwright, Charlotte E.; Bonin, Timothy A.; Chilson, Phillip B.; Gibbs, Jeremy A.; Fedorovich, Evgeni; Palmer, Robert D.
2015-05-01
Small-scale turbulent fluctuations of temperature are known to affect the propagation of both electromagnetic and acoustic waves. Within the inertial-subrange scale, where the turbulence is locally homogeneous and isotropic, these temperature perturbations can be described, in a statistical sense, using the structure-function parameter for temperature, . Here we investigate different methods of evaluating , using data from a numerical large-eddy simulation together with atmospheric observations collected by an unmanned aerial system and a sodar. An example case using data from a late afternoon unmanned aerial system flight on April 24 2013 and corresponding large-eddy simulation data is presented and discussed.
Efficient analysis of large-scale genome-wide data with two R packages: bigstatsr and bigsnpr.
Privé, Florian; Aschard, Hugues; Ziyatdinov, Andrey; Blum, Michael G B
2017-03-30
Genome-wide datasets produced for association studies have dramatically increased in size over the past few years, with modern datasets commonly including millions of variants measured in dozens of thousands of individuals. This increase in data size is a major challenge severely slowing down genomic analyses, leading to some software becoming obsolete and researchers having limited access to diverse analysis tools. Here we present two R packages, bigstatsr and bigsnpr, allowing for the analysis of large scale genomic data to be performed within R. To address large data size, the packages use memory-mapping for accessing data matrices stored on disk instead of in RAM. To perform data pre-processing and data analysis, the packages integrate most of the tools that are commonly used, either through transparent system calls to existing software, or through updated or improved implementation of existing methods. In particular, the packages implement fast and accurate computations of principal component analysis and association studies, functions to remove SNPs in linkage disequilibrium and algorithms to learn polygenic risk scores on millions of SNPs. We illustrate applications of the two R packages by analyzing a case-control genomic dataset for celiac disease, performing an association study and computing Polygenic Risk Scores. Finally, we demonstrate the scalability of the R packages by analyzing a simulated genome-wide dataset including 500,000 individuals and 1 million markers on a single desktop computer. https://privefl.github.io/bigstatsr/ & https://privefl.github.io/bigsnpr/. florian.prive@univ-grenoble-alpes.fr & michael.blum@univ-grenoble-alpes.fr. Supplementary materials are available at Bioinformatics online.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which-as shown on the contact process-provides a significant improvement of the large deviation function estimators compared to the standard one.
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
The large scale microelectronics Computer-Aided Design and Test (CADAT) system
NASA Technical Reports Server (NTRS)
Gould, J. M.
1978-01-01
The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We propose to analyse long series (10 years) of daily precipitation at different regions. We present the Fourier densities energy spectra and morphological spectra (i.e. probability repartition functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global precipitation climatology project (GPCP) and local pluviometers long time series in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (North of 12°N), the morphological spectra are close to exponential decreasing distribution. This fact allows to define two characteristic scales (duration and space extension) for the precipitating region embedded into the large meso-scale convective system (MCS). For tropical and equatorial regions (South of 12°N) the morphological spectra are close to a Levy-stable distribution (power law decrease) which does not allow to define a characteristic scale (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined, and compared to observed zonal advection. Maps of the characteristic scales and Levy-stable exponent over West Africa and south Europe are presented. The 12° latitude transition between exponential and Levy-stable behaviors of precipitating MCS is compared with the result of ECMWF ERA-Interim reanalysis for the same period. This morphological sharp transition could be used to test the different parameterizations of deep convection in forecast models.