BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment
ERIC Educational Resources Information Center
Cui, Ying; Mousavi, Amin
2015-01-01
The current study applied the person-fit statistic, l_z, to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. Under the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes; they exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as Ẽ = 0.5⟨u'_i²⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets ⟨ ⟩ denote the grid-scale horizontal averaging operator of the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package is accompanied by a script for automated installation of necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems after the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/.
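The linear-scaling claim above rests on assigning each incoming protein to an existing, iteratively refined family profile instead of comparing all proteins pairwise. The sketch below illustrates that idea only; it is not the DeNoGAP code, and the k-mer Jaccard score is a stand-in for a real profile/HMM score (e.g., from HMMER).

```python
# Hedged sketch of iterative, profile-based clustering (not the actual DeNoGAP
# implementation).  Each sequence is scored against the K existing clusters rather
# than against all N previously seen sequences, which is where the roughly linear
# cost comes from.  The k-mer Jaccard score below is a placeholder for an HMM score.

def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def score(seq, profile_kmers):
    """Placeholder similarity between a sequence and a cluster 'profile'."""
    s = kmers(seq)
    return len(s & profile_kmers) / max(1, len(s | profile_kmers))

def iterative_cluster(sequences, threshold=0.35):
    clusters = []  # each cluster: {"members": [...], "profile": set of k-mers}
    for name, seq in sequences:
        best, best_score = None, 0.0
        for c in clusters:                      # compare to K clusters, not N sequences
            sc = score(seq, c["profile"])
            if sc > best_score:
                best, best_score = c, sc
        if best is not None and best_score >= threshold:
            best["members"].append(name)
            best["profile"] |= kmers(seq)       # crude "profile refinement"
        else:
            clusters.append({"members": [name], "profile": kmers(seq)})
    return clusters

if __name__ == "__main__":
    demo = [("p1", "MKTAYIAKQR"), ("p2", "MKTAYIAKQK"), ("p3", "GGHLLVAVPA")]
    for i, c in enumerate(iterative_cluster(demo)):
        print(i, c["members"])
```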
Rotation invariant fast features for large-scale recognition
NASA Astrophysics Data System (ADS)
Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd
2012-10-01
We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.
Malucelli, Emil; Procopio, Alessandra; Fratini, Michela; Gianoncelli, Alessandra; Notargiacomo, Andrea; Merolle, Lucia; Sargenti, Azzurra; Castiglioni, Sara; Cappadone, Concettina; Farruggia, Giovanna; Lombardo, Marco; Lagomarsino, Stefano; Maier, Jeanette A; Iotti, Stefano
2018-01-01
The quantification of elemental concentration in cells is usually performed by analytical assays on large populations, missing peculiar but important rare cells. The present article aims at comparing elemental quantification in single cells and cell populations in three different cell types, using a new approach for single-cell elemental analysis performed at sub-micrometer scale that combines X-ray fluorescence microscopy and atomic force microscopy. The attention is focused on the light element Mg, exploiting the opportunity to compare the single-cell quantification to the cell population analysis carried out by a highly Mg-selective fluorescent chemosensor. The results show that the single-cell analysis reveals the same Mg differences found in large populations of the different cell strains studied. However, in one of the cell strains, single-cell analysis reveals two cells with an exceptionally high intracellular Mg content compared with the other cells of the same strain. The single-cell analysis allows mapping Mg and other light elements in whole cells at sub-micrometer scale. A detailed intensity correlation analysis on the two cells with the highest Mg content reveals that Mg subcellular localization correlates with oxygen in a different fashion with respect to the other sister cells of the same strain. Graphical abstract: Single cells or large population analysis - this is the question!
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ markedly from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. These results demonstrate that the new approach is effective for distinguishing three similar stock market systems and revealing some of their inherent differences.
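For readers unfamiliar with order recurrence plots, the sketch below shows one way to build them at several coarse-graining scales; the embedding dimension, delay, scale values, and toy signal are illustrative choices, not those used in the cited MSRQA study.

```python
# Hedged sketch of an order-pattern (ordinal) recurrence plot computed at several
# coarse-graining scales.  Parameter choices are illustrative.
import numpy as np

def coarse_grain(x, s):
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def order_patterns(x, m=3, tau=1):
    idx = np.arange(len(x) - (m - 1) * tau)
    windows = np.stack([x[idx + j * tau] for j in range(m)], axis=1)
    return np.argsort(windows, axis=1)            # ordinal pattern of each window

def order_recurrence_plot(x, m=3, tau=1):
    p = order_patterns(x, m, tau)
    codes = (p * (m ** np.arange(m))).sum(axis=1)  # encode each pattern as an integer
    return (codes[:, None] == codes[None, :]).astype(int)

def recurrence_rate(rp):
    return rp.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.3 * rng.standard_normal(4000)
    for s in (1, 5, 20):
        rp = order_recurrence_plot(coarse_grain(x, s))
        print(f"scale {s:2d}: recurrence rate = {recurrence_rate(rp):.3f}")
```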
The Use of Weighted Graphs for Large-Scale Genome Analysis
Zhou, Fang; Toivonen, Hannu; King, Ross D.
2014-01-01
There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
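As a concrete illustration of the proposed data structure, the sketch below collapses toy genome-specific enzyme sets into a single weighted graph in the spirit of the "taxonomic" weighting described above. The presence data, adjacency list, and weighting rule are invented for the example and are not taken from the paper.

```python
# Hedged sketch: one way to collapse many genome-specific metabolic networks into a
# single weighted graph.  Node weight = fraction of genomes containing the enzyme;
# edge weight = fraction of genomes containing both endpoints.  Toy data only.
import itertools
import networkx as nx

# enzyme presence per genome (toy data keyed by EC number)
genomes = {
    "genomeA": {"1.1.1.1", "2.7.1.1", "4.1.2.13"},
    "genomeB": {"1.1.1.1", "2.7.1.1"},
    "genomeC": {"2.7.1.1", "4.1.2.13"},
}
# metabolic adjacency between enzymes (toy: enzymes assumed to share a metabolite)
adjacent = [("1.1.1.1", "2.7.1.1"), ("2.7.1.1", "4.1.2.13")]

G = nx.Graph()
n = len(genomes)
all_enzymes = set(itertools.chain.from_iterable(genomes.values()))
for ec in all_enzymes:
    G.add_node(ec, weight=sum(ec in s for s in genomes.values()) / n)
for u, v in adjacent:
    G.add_edge(u, v, weight=sum(u in s and v in s for s in genomes.values()) / n)

for u, v, d in G.edges(data=True):
    print(u, "--", v, "weight", round(d["weight"], 2))
```

The point of the weighted graph is that it has a fixed size regardless of how many genomes are summarized, so downstream analyses avoid the pairwise blow-up mentioned above.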
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
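As background on why the gradient cost becomes essentially independent of the number of parameters, one common continuous-adjoint formulation is sketched below; it is stated generically and is not taken from the cited paper.

```latex
% One common continuous-adjoint formulation (background sketch, not from the paper).
% For a state x(t;\theta) obeying an ODE and an integral objective J, the gradient
% with respect to all parameters follows from a single backward (adjoint) solve:
\begin{align*}
\dot{x} &= f(x,\theta,t), \qquad x(0)=x_0, \\
J(\theta) &= \int_0^T g\!\left(x(t;\theta),t\right)\,dt, \\
\dot{\lambda} &= -\left(\frac{\partial f}{\partial x}\right)^{\!\top}\lambda
               -\left(\frac{\partial g}{\partial x}\right)^{\!\top},
               \qquad \lambda(T)=0, \\
\frac{dJ}{d\theta} &= \int_0^T \left(\frac{\partial g}{\partial \theta}
               + \lambda^{\top}\frac{\partial f}{\partial \theta}\right) dt .
\end{align*}
% With time-discrete measurements, g contributes point-wise terms and the adjoint
% acquires jump conditions at the measurement times.
```

The gradient with respect to every component of θ thus requires one forward solve for x and one backward solve for λ, whereas forward sensitivity analysis requires an additional solve per parameter.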
A large-scale perspective on stress-induced alterations in resting-state networks
NASA Astrophysics Data System (ADS)
Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron
2016-02-01
Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed and how it relates to subjective experience are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change, involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.
Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis
ERIC Educational Resources Information Center
Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle
2017-01-01
Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…
Franklin, Jessica M; Rassen, Jeremy A; Bartels, Dorothee B; Schneeweiss, Sebastian
2014-01-01
Nonrandomized safety and effectiveness studies are often initiated immediately after the approval of a new medication, but patients prescribed the new medication during this period may be substantially different from those receiving an existing comparator treatment. Restricting the study to comparable patients after data have been collected is inefficient in prospective studies with primary collection of outcomes. We discuss design and methods for evaluating covariate data to assess the comparability of treatment groups, identify patient subgroups that are not comparable, and decide when to transition to a large-scale comparative study. We demonstrate methods in an example study comparing Cox-2 inhibitors during their postmarketing period (1999-2005) with nonselective nonsteroidal anti-inflammatory drugs (NSAIDs). Graphical checks of propensity score distributions in each treatment group showed substantial problems with overlap in the initial cohorts. In the first half of 1999, >40% of patients were in the region of nonoverlap on the propensity score, and across the study period this fraction never dropped below 10% (the a priori decision threshold for transitioning to the large-scale study). After restricting to patients with no prior NSAID use, <1% of patients were in the region of nonoverlap, indicating that a large-scale study could be initiated in this subgroup and few patients would need to be trimmed from analysis. A sequential study design that uses pilot data to evaluate treatment selection can guide the efficient design of large-scale outcome studies with primary data collection by focusing on comparable patients.
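A minimal sketch of the kind of overlap diagnostic described above follows: fit a propensity score model, compute the fraction of patients outside the common support of the two treatment groups, and compare it with the 10% decision threshold. The simulated data, logistic model, and range-based definition of non-overlap are illustrative assumptions, not the study's exact procedure.

```python
# Hedged sketch of a propensity-score overlap check.  Non-overlap here means falling
# outside the propensity-score range covered by both groups; the 10% threshold mirrors
# the a priori transition rule described in the abstract.  Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                      # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * x[:, 0] + 0.5 * x[:, 1]))))

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

lo = max(ps[treated == 1].min(), ps[treated == 0].min())
hi = min(ps[treated == 1].max(), ps[treated == 0].max())
frac_nonoverlap = np.mean((ps < lo) | (ps > hi))

print(f"fraction outside common support: {frac_nonoverlap:.1%}")
if frac_nonoverlap < 0.10:                       # a priori transition threshold
    print("overlap acceptable: transition to the large-scale comparative study")
else:
    print("restrict the cohort (e.g., to comparable subgroups) and re-check")
```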
Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J
2017-11-01
Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse consequences on mental health. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large District Health Boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and Child and Adolescent inpatient services compared to the seven other large district health boards. Admission Health of the Nation Outcome Scales scores for Child and Adolescent community services did not change significantly, while admission Health of the Nation Outcome Scales scores for Alcohol and Drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable with the exception of admissions to inpatient services for the group aged 4-17 years which showed a large increase. The Canterbury earthquakes were followed by an increase in Health of the Nation Outcome Scales scores for attendees of local mental health services compared to other large district health boards. This suggests that patients presented with greater degrees of psychiatric distress, social disruption, behavioural change and impairment as a result of the earthquakes.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
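The endmember-derivation step described above can be pictured as a linear inverse problem: with training pixels whose abundances are known, the endmember spectra are the least-squares solution of A·E ≈ S. The sketch below uses synthetic abundances and spectra and shows the unconstrained case; fully constrained SMA would additionally impose sum-to-one and non-negativity constraints.

```python
# Hedged sketch of the least-squares-solution (LSS) idea: derive endmember spectra
# from sample pixels with known abundances, then unmix a new pixel.  Sizes and
# values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_end, n_bands = 200, 3, 6              # samples, endmembers, spectral bands

E_true = rng.uniform(0.05, 0.6, size=(n_end, n_bands))            # unknown in practice
A = rng.dirichlet(np.ones(n_end), size=n_train)                   # known training abundances
S = A @ E_true + 0.005 * rng.standard_normal((n_train, n_bands))  # observed spectra

# Step 1: derive endmember signatures from samples with known abundances
E_hat, *_ = np.linalg.lstsq(A, S, rcond=None)

# Step 2: unconstrained SMA for a new pixel (constrained SMA would solve the same
# system subject to sum-to-one and non-negativity constraints)
pixel = np.array([0.7, 0.2, 0.1]) @ E_true
abund, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None)
print("estimated abundances:", np.round(abund, 3))
```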
Mackey, Aaron J; Pearson, William R
2004-10-01
Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
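The general pattern described above can be pictured with a small SQLite example: store sequences and similarity-search hits in relational tables, then issue a join query that pulls out a library subset for a focused follow-up search. The schema and data below are illustrative and do not reproduce the actual seqdb_demo schema.

```python
# Hedged sketch (illustrative schema, not seqdb_demo): sequences and search hits in
# SQLite, plus a query that extracts a taxon-restricted subset with strong hits.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
    CREATE TABLE hit     (query TEXT, subject TEXT, evalue REAL,
                          FOREIGN KEY(subject) REFERENCES protein(acc));
""")
con.executemany("INSERT INTO protein VALUES (?,?,?)", [
    ("P1", "E. coli", "MKT..."),
    ("P2", "E. coli", "GGH..."),
    ("P3", "B. subtilis", "MAL..."),
])
con.executemany("INSERT INTO hit VALUES (?,?,?)", [
    ("Q1", "P1", 1e-30), ("Q1", "P3", 0.5),
])

# library subset: E. coli proteins with at least one hit below an E-value cutoff
rows = con.execute("""
    SELECT DISTINCT p.acc FROM protein p
    JOIN hit h ON h.subject = p.acc
    WHERE p.taxon = 'E. coli' AND h.evalue < 1e-5
""").fetchall()
print(rows)   # -> [('P1',)]
```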
Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar
2017-09-01
Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site response, and instrument responses. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize the stacking technique to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows a large quantity of events to be examined systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, I compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
Yagnik, Gargey B.; Hansen, Rebecca L.; Korte, Andrew R.; ...
2016-08-30
Nanoparticles (NPs) have been suggested as efficient matrixes for small molecule profiling and imaging by laser-desorption ionization mass spectrometry (LDI-MS), but so far there has been no systematic study comparing different NPs in the analysis of various classes of small molecules. Here, we present a large scale screening of 13 NPs for the analysis of two dozen small metabolite molecules. Many NPs showed much higher LDI efficiency than organic matrixes in positive mode and some NPs showed comparable efficiencies for selected analytes in negative mode. Our results suggest that a thermally driven desorption process is a key factor for metal oxide NPs, but chemical interactions are also very important, especially for other NPs. Furthermore, the screening results provide a useful guideline for the selection of NPs in the LDI-MS analysis of small molecules.
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.
NASA Astrophysics Data System (ADS)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A
2017-12-21
Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3 year mortality in patients with COPD are ADO and eBODE.
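The core MSC step can be sketched as follows: within each cohort, compute the discrimination (AUC) of each candidate score and of the common comparator, take their difference, and pool the cohort-level differences across cohorts. The bootstrap standard errors and fixed-effect pooling below are deliberate simplifications of the full network-meta-analytic model, and the data are synthetic.

```python
# Hedged sketch of a cohort-level discrimination comparison against a common
# comparator, followed by inverse-variance pooling.  This is a simplified stand-in
# for the network-meta-analytic MSC model described above; data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_diff_and_se(score, comparator, outcome, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    diffs, n = [], len(outcome)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if outcome[idx].min() == outcome[idx].max():
            continue                               # resample had only one class
        diffs.append(roc_auc_score(outcome[idx], score[idx]) -
                     roc_auc_score(outcome[idx], comparator[idx]))
    diffs = np.array(diffs)
    return diffs.mean(), diffs.std(ddof=1)

def pool_fixed_effect(estimates, ses):
    w = 1.0 / np.square(ses)
    return np.sum(w * estimates) / np.sum(w), np.sqrt(1.0 / np.sum(w))

ests, ses = [], []
for seed in (10, 11):                              # two toy cohorts
    r = np.random.default_rng(seed)
    y = r.binomial(1, 0.3, 400)
    comparator = y + r.normal(0, 1.2, 400)         # weaker score
    new_score = y + r.normal(0, 0.8, 400)          # stronger score
    d, s = auc_diff_and_se(new_score, comparator, y, seed=seed)
    ests.append(d); ses.append(s)
print(pool_fixed_effect(np.array(ests), np.array(ses)))
```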
Cache Coherence Protocols for Large-Scale Multiprocessors
1990-09-01
and is compared with the other protocols for large-scale machines. In later analysis, this coherence method is designated by the acronym OCPD. (Remaining snippet text is table and figure residue: Table 4.2, Transaction Types and Costs, listing private read/write miss costs; Figure 4-2, processor utilizations of the Weather program.)
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also...
Effects of large-scale wind driven turbulence on sound propagation
NASA Technical Reports Server (NTRS)
Noble, John M.; Bass, Henry E.; Raspet, Richard
1990-01-01
Acoustic measurements made in the atmosphere have shown significant fluctuations in amplitude and phase resulting from the interaction with time-varying meteorological conditions. The observed variations include both short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for this long-term variation is the use of a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scales for the large-scale turbulence are 200 meters and greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Because the vortex pairs are large compared to the scale of the present experiment, their effect on the acoustic field can be modeled as a time-varying sound speed of the atmosphere. The model provides results with the same trends and variations in phase as observed experimentally.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li-Ta
2010-10-15
Extremely large-scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find that software based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-07-06
Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
ERIC Educational Resources Information Center
Chudagr, Amita; Luschei, Thomas F.
2016-01-01
The objective of this commentary is to call attention to the feasibility and importance of large-scale, systematic, quantitative analysis in international and comparative education research. We contend that although many existing databases are under- or unutilized in quantitative international-comparative research, these resources present the…
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
TARGET Publication Guidelines | Office of Cancer Genomics
Like other NCI large-scale genomics initiatives, TARGET is a community resource project and data are made available rapidly after validation for use by other researchers. To act in accord with the Fort Lauderdale principles and support the continued prompt public release of large-scale genomic data prior to publication, researchers who plan to prepare manuscripts containing descriptions of TARGET pediatric cancer data that would be of comparable scope to an initial TARGET disease-specific comprehensive, global analysis publication, and journal editors who receive such manuscripts, are
The large-scale organization of metabolic networks
NASA Astrophysics Data System (ADS)
Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.
2000-10-01
In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
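A minimal illustration of the kind of topological scaling the abstract refers to is a degree-distribution check on log-log axes, sketched below on a synthetic Barabási-Albert graph rather than a real metabolic network; the least-squares slope is a crude exponent estimate (maximum-likelihood fitting would be preferable for real data).

```python
# Hedged sketch: compare a network's degree distribution to a power law on log-log
# axes.  The Barabasi-Albert toy graph stands in for a real metabolic network.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(2000, 2, seed=0)     # scale-free toy graph
degrees = np.array([d for _, d in G.degree()])

# empirical P(k) and a least-squares slope estimate on log-log axes
k, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()
slope, intercept = np.polyfit(np.log(k), np.log(pk), 1)
print(f"estimated exponent gamma ~ {-slope:.2f}")  # BA model expects ~3
```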
Major soybean maturity gene haplotypes revealed by SNPViz analysis of 72 sequenced soybean genomes
USDA-ARS's Scientific Manuscript database
In this Genomics Era, vast amounts of next generation sequencing data have become publicly-available for multiple genomes across hundreds of species. Analysis of these large-scale datasets can become cumbersome, especially when comparing nucleotide polymorphisms across many samples within a dataset...
Spatio-temporal hierarchy in the dynamics of a minimalist protein model
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen
2013-12-01
A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
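The two ingredients named above, PCA to obtain collective coordinates and a time-localized wavelet transform to follow energy across scales, can be sketched on synthetic data as below. PyWavelets and the Morlet wavelet are choices made for this illustration; the cited study's wavelet basis and model details may differ.

```python
# Hedged sketch: (1) PCA of a (frames x coordinates) trajectory via SVD, (2) a
# continuous wavelet transform of the leading principal component to localize when
# energy moves between time scales.  Synthetic data; parameter choices are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
frames, n_coords = 4000, 30
t = np.arange(frames)
traj = 0.1 * rng.standard_normal((frames, n_coords))
traj[:, 0] += np.where(t < 2000, np.sin(2 * np.pi * t / 50),   # fast motion early,
                                 np.sin(2 * np.pi * t / 400))  # slow motion late

# PCA by SVD of the mean-centered trajectory
X = traj - traj.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]                                   # leading collective coordinate

# continuous wavelet transform of PC1
scales = np.arange(4, 400, 4)
coeffs, freqs = pywt.cwt(pc1, scales, "morl")
power = np.abs(coeffs) ** 2
print("dominant scale (first half) :", scales[power[:, :2000].mean(axis=1).argmax()])
print("dominant scale (second half):", scales[power[:, 2000:].mean(axis=1).argmax()])
```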
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
On identifying relationships between the flood scaling exponent and basin attributes.
Medhi, Hemanta; Tripathi, Shivam
2015-07-01
Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which the snowfall dominates the total precipitation.
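As a rough illustration of the first stage described above, the following Python sketch estimates a flood scaling exponent by ordinary least squares on log-transformed quantities; the synthetic drainage areas, flood quantiles, and assumed exponent are invented for the example, and the subsequent stepwise regression on basin attributes is not shown.

```python
# Scaling law Q = c * A**theta  ->  log Q = log c + theta * log A
# Estimate theta by OLS on synthetic data (all values are illustrative).
import numpy as np

rng = np.random.default_rng(1)
area = rng.uniform(50, 5000, size=40)                    # basin areas, km^2
true_theta = 0.55                                        # assumed exponent
flood = 3.0 * area ** true_theta * rng.lognormal(0, 0.2, size=40)

X = np.column_stack([np.ones_like(area), np.log(area)])  # intercept + log(area)
coef, *_ = np.linalg.lstsq(X, np.log(flood), rcond=None)
print(f"estimated scaling exponent theta ~ {coef[1]:.3f}")
```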
Hunt, Geoffrey; Moloney, Molly; Fazio, Adam
2012-01-01
Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079
Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; ...
2015-01-20
We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.
NASA Astrophysics Data System (ADS)
Brugger, Peter; Katul, Gabriel G.; De Roo, Frederik; Kröniger, Konstantin; Rotenberg, Eyal; Rohatyn, Shani; Mauder, Matthias
2018-05-01
Anisotropy in the turbulent stress tensor, which forms the basis of invariant analysis, is examined using velocity time series measurements collected in the canopy sublayer (CSL) and the atmospheric surface layer (ASL). The goal is to assess how thermal stratification and surface roughness conditions simultaneously distort the scalewise relaxation towards isotropic state from large to small scales when referenced to homogeneous turbulence. To achieve this goal, conventional invariant analysis is extended to allow scalewise information about relaxation to isotropy in physical (instead of Fourier) space to be incorporated. The proposed analysis shows that the CSL is more isotropic than its ASL counterpart at large, intermediate, and small (or inertial) scales irrespective of the thermal stratification. Moreover, the small (or inertial) scale anisotropy is more prevalent in the ASL when compared to the CSL, a finding that cannot be fully explained by the intensity of the mean velocity gradient acting on all scales. Implications to the validity of scalewise Rotta and Lumley models for return to isotropy as well as advantages to using barycentric instead of anisotropy invariant maps for such scalewise analysis are discussed.
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP; Soares, Thereza A.
2007-12-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to more biologically relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much-needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and it has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its assigned entries within the trajectory, which will typically be spread over multiple files, and reads the appropriate frames independently of all other processors.
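The frame-distribution idea described for DIANA can be illustrated with a small, self-contained Python sketch in which each worker seeks directly to its own frames of a binary trajectory held on disk; the file layout, frame size, and round-robin assignment are assumptions made for the example and do not reflect the actual DIANA code.

```python
# Sketch: every worker independently seeks to and reads only its own frames,
# so the trajectory is read exactly once overall. Layout and assignment are assumed.
import numpy as np

N_FRAMES, N_ATOMS = 1000, 500
FRAME_BYTES = N_ATOMS * 3 * 8                 # one frame of float64 xyz coordinates

def frames_for_rank(rank, n_ranks, n_frames):
    """Round-robin assignment of frame indices to a worker."""
    return range(rank, n_frames, n_ranks)

def read_frames(path, frame_indices):
    """Seek directly to each assigned frame instead of streaming the whole file."""
    frames = []
    with open(path, "rb") as fh:
        for i in frame_indices:
            fh.seek(i * FRAME_BYTES)
            buf = fh.read(FRAME_BYTES)
            frames.append(np.frombuffer(buf, dtype=np.float64).reshape(N_ATOMS, 3))
    return frames

if __name__ == "__main__":
    # Emulate 4 workers sharing one trajectory file.
    rng = np.random.default_rng(2)
    rng.normal(size=(N_FRAMES, N_ATOMS, 3)).astype(np.float64).tofile("traj.bin")
    for rank in range(4):
        chunk = read_frames("traj.bin", frames_for_rank(rank, 4, N_FRAMES))
        print(f"worker {rank}: {len(chunk)} frames in memory")
```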
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the significance-level estimation and multiple hypothesis testing steps, its computational scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time for the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
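For orientation, the following Python toy computes the kind of weighted running-sum enrichment score that GSEA-style tools use; the ranked profile, gene set, and weighting are synthetic, and this is not the paraGSEA implementation.

```python
# Toy weighted Kolmogorov-Smirnov-style running sum for one gene set.
# All data are synthetic; this only illustrates the enrichment-score idea.
import numpy as np

rng = np.random.default_rng(3)
n_genes = 1000
scores = np.sort(rng.normal(size=n_genes))[::-1]         # ranked expression profile
genes = np.array([f"g{i}" for i in range(n_genes)])
gene_set = set(rng.choice(genes, size=50, replace=False))

in_set = np.isin(genes, list(gene_set))
hit = np.abs(scores) * in_set
hit = hit / hit.sum()                                     # weighted hit increments
miss = (~in_set) / (~in_set).sum()                        # uniform miss increments
running = np.cumsum(hit - miss)
es = running[np.abs(running).argmax()]                    # enrichment score
print(f"enrichment score: {es:.3f}")
```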
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
ERIC Educational Resources Information Center
Burstein, Leigh
Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…
Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System
NASA Astrophysics Data System (ADS)
Yue, Liu; Hang, Mend
2018-01-01
With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission is gradually becoming the main way to deliver wind power and to improve wind power availability and grid stability, but the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and aims to summarize the SSO mechanisms in large-scale, series-compensated wind power integrated systems, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures adopted in actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.
Michael R. Saunders; Justin E. Arseneault
2013-01-01
In long-term, large-scale forest management studies, documentation of pre-treatment differences among and variability within experimental units is critical for drawing the proper inferences from imposed treatments. We compared pre-treatment overstory and large shrub communities (diameters at breast height >1.5 cm) for the 9 research cores with the Hardwood Ecosystem...
2013-01-01
The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial is a large-scale research effort conducted by the National Cancer Institute. PLCO offers an example of coordinated research by both the extramural and intramural communities of the National Institutes of Health. The purpose of this article is to describe the PLCO research resource and how it is managed and to assess the productivity and the costs associated with this resource. Such an in-depth analysis of a single large-scale project can shed light on questions such as how large-scale projects should be managed, what metrics should be used to assess productivity, and how costs can be compared with productivity metrics. A comprehensive publication analysis identified 335 primary research publications resulting from research using PLCO data and biospecimens from 2000 to 2012. By the end of 2012, a total of 9679 citations (excluding self-citations) have resulted from this body of research publications, with an average of 29.7 citations per article, and an h index of 45, which is comparable with other large-scale studies, such as the Nurses’ Health Study. In terms of impact on public health, PLCO trial results have been used by the US Preventive Services Task Force in making recommendations concerning prostate and ovarian cancer screening. The overall cost of PLCO was $454 million over 20 years, adjusted to 2011 dollars, with approximately $37 million for the collection, processing, and storage of biospecimens, including blood samples, buccal cells, and pathology tissues. PMID:24115361
Similarity spectra analysis of high-performance jet aircraft noise.
Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M
2013-04-01
Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scale after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
Scaling effects in the impact response of graphite-epoxy composite beams
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.
1989-01-01
In support of crashworthiness studies on composite airframes and substructure, an experimental and analytical study was conducted to characterize size effects in the large deflection response of scale model graphite-epoxy beams subjected to impact. Scale model beams of 1/2, 2/3, 3/4, 5/6, and full scale were constructed of four different laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic. The beam specimens were subjected to eccentric axial impact loads which were scaled to provide homologous beam responses. Comparisons of the load and strain time histories between the scale model beams and the prototype should verify the scale law and demonstrate the use of scale model testing for determining impact behavior of composite structures. The nonlinear structural analysis finite element program DYCAST (DYnamic Crash Analysis of STructures) was used to model the beam response. DYCAST analysis predictions of beam strain response are compared to experimental data and the results are presented.
A new method of presentation the large-scale magnetic field structure on the Sun and solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field which occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve the time variations of spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be quite useful when direct magnetograph observations are lacking, for example owing to bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with spacecraft in situ observations and geomagnetic activity. We tried to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.
2016-01-01
Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
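A hedged sketch of the core modeling step, using scikit-learn's linear-kernel SVM on simulated spectra; the preprocessing, the hierarchical classification scheme, and the data dimensions are stand-ins, not the study's actual MicroNIR workflow.

```python
# Sketch: linear-kernel SVM classification of (simulated) NIR spectra.
# Class templates, noise level, and wavelength count are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_classes, n_per_class, n_wavelengths = 5, 30, 125
templates = rng.normal(size=(n_classes, n_wavelengths))
X = np.vstack([t + 0.1 * rng.normal(size=(n_per_class, n_wavelengths)) for t in templates])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```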
Experimental investigation of large-scale vortices in a freely spreading gravity current
NASA Astrophysics Data System (ADS)
Yuan, Yeping; Horner-Devine, Alexander R.
2017-10-01
A series of laboratory experiments are presented to compare the dynamics of constant-source buoyant gravity currents propagating into laterally confined (channelized) and unconfined (spreading) environments. The plan-form structure of the spreading current and the vertical density and velocity structures on the interface are quantified using the optical thickness method and a combined particle image velocimetry and planar laser-induced fluorescence method, respectively. With lateral boundaries, the buoyant current thickness is approximately constant and Kelvin-Helmholtz instabilities are generated within the shear layer. The buoyant current structure is significantly different in the spreading case. As the current spreads laterally, nonlinear large-scale vortex structures are observed at the interface, which maintain a coherent shape as they propagate away from the source. These structures are continuously generated near the river mouth, have amplitudes close to the buoyant layer thickness, and propagate offshore at speeds approximately equal to the internal wave speed. The observed depth and propagation speed of the instabilities match well with the fastest growing mode predicted by linear stability analysis, but with a shorter wavelength. The spreading flows have much higher vorticity, which is aggregated within the large-scale structures. Secondary instabilities are generated on the leading edge of the braids between the large-scale vortex structures and ultimately break and mix on the lee side of the structures. Analysis of the vortex dynamics shows that lateral stretching intensifies the vorticity in the spreading currents, contributing to higher vorticity within the large-scale structures in the buoyant plume. The large-scale instabilities and vortex structures observed in the present study provide new insights into the origin of internal frontal structures frequently observed in coastal river plumes.
Translational bioinformatics in the cloud: an affordable alternative
2010-01-01
With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FACs characteristics and dependence on controlling parameters do not map between scales in a straight forward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We propose to analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and morphological spectra (i.e. probability distribution functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This allows us to define two characteristic scales (duration and spatial extension) for the precipitating regions embedded in a large mesoscale convective system (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Levy-stable distribution (power-law decrease), which does not allow a characteristic scale to be defined (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCSs can be defined and compared to the observed zonal advection. Maps of the characteristic scales and Levy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Levy-stable behaviors of precipitating MCSs is compared with the results of the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test the different parameterizations of deep convection in forecast models.
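To make the exponential-versus-heavy-tail distinction concrete, the Python sketch below fits both an exponential and a Pareto (power-law) model to a synthetic sample of event durations by maximum likelihood and compares log-likelihoods; it is a simplification of the morphological-spectrum analysis, and the sample itself is invented.

```python
# Compare exponential vs. Pareto (power-law) fits to synthetic event durations.
import numpy as np

rng = np.random.default_rng(5)
durations = rng.pareto(1.5, size=2000) + 1.0          # heavy-tailed sample, x_min = 1

# Exponential MLE: rate = 1 / mean
lam = 1.0 / durations.mean()
ll_exp = np.sum(np.log(lam) - lam * durations)

# Pareto MLE with x_min = 1: alpha = n / sum(log x)
alpha = durations.size / np.log(durations).sum()
ll_par = np.sum(np.log(alpha) - (alpha + 1.0) * np.log(durations))

print(f"log-likelihoods  exponential: {ll_exp:.1f}   power law: {ll_par:.1f}")
```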
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
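A much-reduced sketch of the general approach, classifying on sparse codes from a learned dictionary with scikit-learn; the multi-level, tree-structured dictionaries and the joint tree loss of the method above are not reproduced, and the data are synthetic.

```python
# Sketch: learn a dictionary, use the sparse codes as features for a classifier.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc=m, size=(50, 30)) for m in (0.0, 1.0)])
y = np.repeat([0, 1], 50)

dico = DictionaryLearning(n_components=16, alpha=1.0, max_iter=200, random_state=0)
codes = dico.fit_transform(X)                     # sparse codes as features
clf = LogisticRegression(max_iter=1000).fit(codes, y)
print(f"training accuracy on sparse codes: {clf.score(codes, y):.2f}")
```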
ERIC Educational Resources Information Center
Wilkin, John P.
2017-01-01
The 1961 Copyright Office study on renewals, authored by Barbara Ringer, has cast an outsized influence on discussions of the U.S. 1923-1963 public domain. As more concrete data emerge from initiatives such as the large-scale determination process in the Copyright Review Management System (CRMS) project, questions are raised about the reliability…
ERIC Educational Resources Information Center
Oliveri, Maria Elena; von Davier, Matthias
2014-01-01
In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
NASA Astrophysics Data System (ADS)
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verifications via spectral decomposition that are a standard RCM verification method. The results we have at this point are somewhat limited in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among very few done using forecast as opposed to reanalysis or analysis global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to be taking place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while enjoying the advantage of the scheme being significantly less demanding than the relaxation given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large scale nudging, seems to be just about the same as that of the driver model, or, in the terminology of Castro et al., the Eta RCM does not lose "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.
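The threshold-based verification described above can be sketched as follows: form a 2x2 contingency table for exceedance of a wind speed threshold in forecast and analysis fields, then compute the equitable threat score and frequency bias. The bias-adjustment step is omitted here and the fields are synthetic, so this is only an illustration of the scoring, not the study's verification code.

```python
# Equitable threat score (ETS) and frequency bias for threshold exceedance.
import numpy as np

def ets_and_bias(forecast, analysis, threshold):
    f, a = forecast >= threshold, analysis >= threshold
    hits = np.sum(f & a)
    false_alarms = np.sum(f & ~a)
    misses = np.sum(~f & a)
    total = forecast.size
    hits_random = (hits + false_alarms) * (hits + misses) / total
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias

rng = np.random.default_rng(6)
analysis = 20.0 + 5.0 * rng.normal(size=(90, 180))            # upper-level wind speeds, m/s
forecast = analysis + 2.0 * rng.normal(size=analysis.shape)   # imperfect forecast
print("ETS = %.3f, bias = %.2f" % ets_and_bias(forecast, analysis, threshold=25.0))
```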
mySyntenyPortal: an application package to construct websites for synteny block analysis.
Lee, Jongin; Lee, Daehwan; Sim, Mikang; Kwon, Daehong; Kim, Juyeon; Ko, Younhee; Kim, Jaebum
2018-06-05
Advances in sequencing technologies have facilitated large-scale comparative genomics based on whole genome sequencing. Constructing and investigating conserved genomic regions among multiple species (called synteny blocks) are essential in the comparative genomics. However, they require significant amounts of computational resources and time in addition to bioinformatics skills. Many web interfaces have been developed to make such tasks easier. However, these web interfaces cannot be customized for users who want to use their own set of genome sequences or definition of synteny blocks. To resolve this limitation, we present mySyntenyPortal, a stand-alone application package to construct websites for synteny block analyses by using users' own genome data. mySyntenyPortal provides both command line and web-based interfaces to build and manage websites for large-scale comparative genomic analyses. The websites can also be easily published and accessed by other users. To demonstrate the usability of mySyntenyPortal, we present an example study for building websites to compare genomes of three mammalian species (human, mouse, and cow) and show how they can be easily utilized to identify potential genes affected by genome rearrangements. mySyntenyPortal will contribute to extended comparative genomic analyses based on large-scale whole genome sequences by providing unique functionality to support the easy creation of interactive websites for synteny block analyses from users' own genome data.
Supersonic Coaxial Jets: Noise Predictions and Measurements
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Papamoschou, Dimitri; Hixon, Ray
1998-01-01
The noise from perfectly expanded coaxial jets was measured in an anechoic chamber for different operating conditions with the same total thrust, mass flow, and exit area. The shape of the measured noise spectrum at different angles to the jet axis was found to agree with spectral shapes for single, axisymmetric jets. Based on these spectra, the sound was characterized as being generated by large turbulent structures or fine-scale turbulence. Modeling the large scale structures as instability waves, a stability analysis was conducted for the coaxial jets to identify the growing and decaying instability waves in each shear layer and predict their noise radiation pattern outside the jet. When compared to measured directivity, the analysis identified the region downstream of the outer potential core, where the two shear layers were merging, as the source of the peak radiated noise where instability waves, with their origin in the inner shear layer, reach their maximum amplitude. Numerical computations were also performed using a linearized Euler equation solver. Those results were compared to both the results from the instability wave analysis and to measured data.
Regional climates in the GISS global circulation model - Synoptic-scale circulation
NASA Technical Reports Server (NTRS)
Hewitson, B.; Crane, R. G.
1992-01-01
A major weakness of current general circulation models (GCMs) is their perceived inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human-environmental response. For large areas of the extratropics, the local climate is controlled by the synoptic-scale atmospheric circulation, and it is the purpose of this paper to evaluate the synoptic-scale circulation of the Goddard Institute for Space Studies (GISS) GCM. A methodology for validating the daily synoptic circulation using Principal Component Analysis is described, and the methodology is then applied to the GCM simulation of sea level pressure over the continental United States (excluding Alaska). The analysis demonstrates that the GISS 4 x 5 deg GCM Model II effectively simulates the synoptic-scale atmospheric circulation over the United States. The modes of variance describing the atmospheric circulation of the model are comparable to those found in the observed data, and these modes explain similar amounts of variance in their respective datasets. The temporal behavior of these circulation modes in the synoptic time frame are also comparable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan Vee; Delgado-Frias, Jose
Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three to five year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor networks and should configure the network in a way such that said higher level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher level functions with significant inherent benefits such as but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.
Using SQL Databases for Sequence Similarity Searching and Analysis.
Pearson, William R; Mackey, Aaron J
2017-09-13
Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
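As a toy illustration of keeping similarity search results in a relational database, the snippet below uses Python's built-in sqlite3; the table layout and rows are invented for the example and are not the schema of the seqdb_demo or search_demo databases described in the unit.

```python
# Toy relational store for similarity search hits, queried with a join/aggregate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, organism TEXT);
CREATE TABLE hit (query_acc TEXT, subject_acc TEXT, evalue REAL, identity REAL);
""")
conn.executemany("INSERT INTO protein VALUES (?, ?)", [
    ("QRY0001", "Escherichia coli"), ("SBJ0001", "Homo sapiens"), ("SBJ0002", "Mus musculus"),
])
conn.executemany("INSERT INTO hit VALUES (?, ?, ?, ?)", [
    ("QRY0001", "SBJ0001", 1e-30, 42.0), ("QRY0001", "SBJ0002", 1e-28, 40.5),
])
# Summarize, per query, its best hit in each other organism.
rows = conn.execute("""
SELECT h.query_acc, p.organism, MIN(h.evalue), MAX(h.identity)
FROM hit h JOIN protein p ON p.acc = h.subject_acc
GROUP BY h.query_acc, p.organism
""").fetchall()
for row in rows:
    print(row)
```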
Yu, Yadong; Li, Tao; Wu, Na; Ren, Lujing; Jiang, Ling; Ji, Xiaojun; Huang, He
2016-11-30
Arachidonic acid (ARA) is an important polyunsaturated fatty acid having various beneficial physiological effects on the human body. The aging of Mortierella alpina has long been known to significantly improve ARA yield, but the exact mechanism is still elusive. Herein, multiple approaches including large-scale label-free comparative proteomics were employed to systematically investigate the mechanism mentioned above. Upon ultrastructural observation, abnormal mitochondria were found to aggregate around shrunken lipid droplets. Proteomics analysis revealed a total of 171 proteins with significant alterations of expression during aging. Pathway analysis suggested that reactive oxygen species (ROS) were accumulated and stimulated the activation of the malate/pyruvate cycle and isocitrate dehydrogenase, which might provide additional NADPH for ARA synthesis. EC 4.2.1.17-hydratase might be a key player in ARA accumulation during aging. These findings provide a valuable resource for efforts to further improve the ARA content in the oil produced by aging M. alpina.
ERIC Educational Resources Information Center
Wall, Kate; Higgins, Steve; Remedios, Richard; Rafferty, Victoria; Tiplady, Lucy
2013-01-01
A key challenge of visual methodology is how to combine large-scale qualitative data sets with epistemologically acceptable and rigorous analysis techniques. The authors argue that a pragmatic approach drawing on ideas from mixed methods is helpful to open up the full potential of visual data. However, before one starts to "mix" the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
Planck data versus large scale structure: Methods to quantify discordance
NASA Astrophysics Data System (ADS)
Charnock, Tom; Battye, Richard A.; Moss, Adam
2017-06-01
Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allow for an update in the quantification of any tension between large and small scales. This paper is intended, primarily, as a discussion on the quantifications of discordance when comparing the parameter constraints of a model when given two different data sets. We consider Kullback-Leibler divergence, comparison of Bayesian evidences and other statistics which are sensitive to the mean, variance and shape of the distributions. However, as a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement and any tension only persists to a minor degree. In particular, we find the parameter constraints from the combination of LSS probes which are most discrepant with the Planck 2015+Pol+BAO parameter distributions can be quantified at a ~2.55σ tension using the method introduced in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508]. If instead we use the distributions constrained by the combination of LSS probes which are in greatest agreement with those from Planck 2015+Pol+BAO, this tension is only 0.76σ.
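For readers unfamiliar with these discordance measures, the following one-parameter Python sketch evaluates a Kullback-Leibler divergence and a naive "number of sigma" separation between two Gaussian posteriors; the numbers are invented and the estimator is far simpler than the ones used in the cited analyses.

```python
# One-dimensional Gaussian illustration of discordance measures.
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

def sigma_tension(mu0, s0, mu1, s1):
    """Separation of the means in units of the combined uncertainty."""
    return abs(mu0 - mu1) / np.hypot(s0, s1)

# e.g. a parameter constrained by the CMB vs. by LSS probes (made-up values)
print(f"KL divergence: {kl_gauss(0.83, 0.013, 0.76, 0.03):.2f} nats")
print(f"tension: {sigma_tension(0.83, 0.013, 0.76, 0.03):.2f} sigma")
```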
Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model
NASA Astrophysics Data System (ADS)
Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.
2017-12-01
This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.
NASA Astrophysics Data System (ADS)
Khatri, Kshitij; Pu, Yi; Klein, Joshua A.; Wei, Juan; Costello, Catherine E.; Lin, Cheng; Zaia, Joseph
2018-04-01
Analysis of singly glycosylated peptides has evolved to a point where large-scale LC-MS analyses can be performed at almost the same scale as proteomics experiments. While collisionally activated dissociation (CAD) remains the mainstay of bottom-up analyses, it performs poorly for the middle-down analysis of multiply glycosylated peptides. With improvements in instrumentation, electron-activated dissociation (ExD) modes are becoming increasingly prevalent for proteomics experiments and for the analysis of fragile modifications such as glycosylation. While these methods have been applied for glycopeptide analysis in isolated studies, an organized effort to compare their efficiencies, particularly for analysis of multiply glycosylated peptides (termed here middle-down glycoproteomics), has not been made. We therefore compared the performance of different ExD modes for middle-down glycopeptide analyses. We identified key features among the different dissociation modes and show that increased electron energy and supplemental activation provide the most useful data for middle-down glycopeptide analysis.
Pao, Sheng-Ying; Lin, Win-Li; Hwang, Ming-Jing
2006-01-01
Background Screening for differentially expressed genes on the genomic scale and comparative analysis of the expression profiles of orthologous genes between species to study gene function and regulation are becoming increasingly feasible. Expressed sequence tags (ESTs) are an excellent source of data for such studies using bioinformatic approaches because of the rich libraries and tremendous amount of data now available in the public domain. However, any large-scale EST-based bioinformatics analysis must deal with the heterogeneous, and often ambiguous, tissue and organ terms used to describe EST libraries. Results To deal with the issue of tissue source, in this work, we carefully screened and organized more than 8 million human and mouse ESTs into 157 human and 108 mouse tissue/organ categories, to which we applied an established statistical test using different thresholds of the p value to identify genes differentially expressed in different tissues. Further analysis of the tissue distribution and level of expression of human and mouse orthologous genes showed that tissue-specific orthologs tended to have more similar expression patterns than those lacking significant tissue specificity. On the other hand, a number of orthologs were found to have significant disparity in their expression profiles, hinting at novel functions, divergent regulation, or new ortholog relationships. Conclusion Comprehensive statistics on the tissue-specific expression of human and mouse genes were obtained in this very large-scale, EST-based analysis. These statistical results have been organized into a database, freely accessible at our website, for easy searching of human and mouse tissue-specific genes and for investigating gene expression profiles in the context of comparative genomics. Comparative analysis showed that, although highly tissue-specific genes tend to exhibit similar expression profiles in human and mouse, there are significant exceptions, indicating that orthologous genes, while sharing basic genomic properties, could result in distinct phenotypes. PMID:16626500
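The abstract does not name the statistical test used; as an illustrative stand-in only, the sketch below uses a one-sided Fisher's exact test to ask whether a gene's ESTs are over-represented in one tissue category relative to the whole collection. All counts and the choice of test are assumptions, not the authors' method.

    from scipy.stats import fisher_exact

    def tissue_enrichment(gene_in_tissue, gene_total, lib_in_tissue, lib_total):
        """One-sided Fisher's exact test for over-representation of a gene's
        ESTs in one tissue category relative to the full EST collection."""
        table = [[gene_in_tissue, gene_total - gene_in_tissue],
                 [lib_in_tissue - gene_in_tissue,
                  (lib_total - lib_in_tissue) - (gene_total - gene_in_tissue)]]
        odds, p = fisher_exact(table, alternative='greater')
        return p

    # Hypothetical counts: 25 of a gene's 40 ESTs come from liver libraries,
    # out of 50,000 liver ESTs in a collection of 8,000,000 ESTs.
    print(tissue_enrichment(25, 40, 50_000, 8_000_000))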
Spectral sum rules for confining large-N theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherman, Aleksey; McGady, David A.; Yamazaki, Masahito
2016-06-17
We consider asymptotically-free four-dimensional large-$N$ gauge theories with massive fermionic and bosonic adjoint matter fields, compactified on squashed three-spheres, and examine their regularized large-$N$ confined-phase spectral sums. The analysis is done in the limit of vanishing ’t Hooft coupling, which is justified by taking the size of the compactification manifold to be small compared to the inverse strong scale $\Lambda^{-1}$. Our results motivate us to conjecture universal spectral sum rules for these large-$N$ gauge theories.
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.
2014-12-01
In this research, we report intermediate scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer)-GPS (Global Positioning System) satellite radio link. The high inclination of the CASSIOPE satellite and the high rate of signal reception by the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionosphere coupling, solar wind effects, etc., in unprecedented detail compared with what was possible in the past. We have carried out high spatial resolution analysis in altitude and geomagnetic latitude of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate scale, scintillation-producing plasma density irregularities, which correspond to spatial scales of 2 to 40 km, were inferred by applying multi-scale spectral analysis to the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. Specifically, we found that larger length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.
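A minimal sketch of one ingredient of such an analysis is shown below: given a uniformly sampled along-track phase-delay series, it integrates the (unnormalised) power spectrum over wavelengths between 2 and 40 km. The sampling interval, detrending choice and synthetic data are assumptions; the paper's multi-scale method is not reproduced here.

    import numpy as np

    def band_power(phase, dx_km, lmin_km=2.0, lmax_km=40.0):
        """Spectral power of a detrended phase-delay series integrated over
        wavelengths between lmin_km and lmax_km.
        phase : 1-D array, uniformly sampled along the occultation track
        dx_km : sample spacing in km"""
        x = np.arange(phase.size)
        phase = phase - np.polyval(np.polyfit(x, phase, 1), x)   # remove linear trend
        freqs = np.fft.rfftfreq(phase.size, d=dx_km)              # cycles per km
        psd = np.abs(np.fft.rfft(phase))**2                       # one-sided, unnormalised
        band = (freqs >= 1.0 / lmax_km) & (freqs <= 1.0 / lmin_km)
        return np.sum(psd[band]) * (freqs[1] - freqs[0])

    # Hypothetical example: synthetic series sampled every 0.5 km with a 10 km wave
    x = np.arange(0, 500, 0.5)
    signal = np.sin(2 * np.pi * x / 10.0) + 0.1 * np.random.randn(x.size)
    print(band_power(signal, dx_km=0.5))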
Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj
2014-01-01
Generating genomic resources in terms of molecular markers is imperative in molecular breeding for crop improvement. Though large-scale development and application of microsatellite markers has been reported in the model crop foxtail millet, no such large-scale study was conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested functions for ∼71.5% of the ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs representing the foxtail millet genome and the different functional groups showed a high level of cross-genera amplification at an average of ∼85% in eight millets and five non-millet species. The efficacy of the ILP markers for distinguishing foxtail millet is demonstrated by observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of physically mapped ILP markers demonstrated a substantial percentage of sequence-based orthology and syntenic relationship between foxtail millet chromosomes and sorghum (∼50%), maize (∼46%), rice (∼21%) and Brachypodium (∼21%) chromosomes. Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
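A minimal sketch of a multiresolution gauge-versus-grid comparison is given below, assuming daily pandas Series as inputs; the aggregation periods, skill metrics and synthetic data are assumptions for illustration, not the verification criteria used in the study.

    import numpy as np
    import pandas as pd

    def skill(obs, sim):
        """Bias, RMSE and Pearson correlation of a gridded product against a gauge."""
        d = pd.DataFrame({'obs': obs, 'sim': sim}).dropna()
        bias = (d['sim'] - d['obs']).mean()
        rmse = np.sqrt(((d['sim'] - d['obs'])**2).mean())
        corr = d['sim'].corr(d['obs'])
        return bias, rmse, corr

    def multiresolution_skill(obs_daily, sim_daily):
        """Evaluate the same pair of series at daily, monthly and annual resolution."""
        out = {}
        for label, rule in [('daily', 'D'), ('monthly', 'MS'), ('annual', 'AS')]:
            out[label] = skill(obs_daily.resample(rule).sum(),
                               sim_daily.resample(rule).sum())
        return out

    # Hypothetical gauge and co-located grid-cell series (synthetic, two years of data)
    idx = pd.date_range('2000-01-01', periods=730, freq='D')
    obs_daily = pd.Series(np.random.gamma(0.5, 4.0, idx.size), index=idx)
    sim_daily = pd.Series(np.clip(obs_daily.values * 1.1
                                  + np.random.normal(0, 1, idx.size), 0, None), index=idx)
    print(multiresolution_skill(obs_daily, sim_daily)['monthly'])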
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large scale distribution of galaxies have been calculated with the method given by Wen et al. [1] In our analysis, samples are taken from the CfA redshift survey [2] in the northern and southern galactic hemispheres. Results from these two regions are compared with each other. There are significant differences between the distributions in these two regions. However, our analyses do show some common features of the distributions in these two regions. All subsamples show multi-level fractal character distinctly. Combining this with results from analyses of IRAS galaxy samples and of samples from redshift surveys in pencil-beam fields [3,4], we suggest that multi-level fractal structure is most likely a general and important character of the large scale distribution of galaxies. The possible implications of this character are discussed.
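For readers unfamiliar with how a fractal dimension of a galaxy distribution can be estimated, the sketch below shows a generic box-counting estimator for a point set; it is not the method of Wen et al., and the synthetic point cloud is purely illustrative.

    import numpy as np

    def box_counting_dimension(points, sizes):
        """Estimate the box-counting dimension of a point set.
        points : (N, d) array of coordinates, assumed rescaled to the unit cube
        sizes  : iterable of box edge lengths (e.g. 1/2, 1/4, ..., 1/64)"""
        counts = []
        for eps in sizes:
            # index of the box containing each point, then count distinct boxes
            idx = np.floor(points / eps).astype(int)
            counts.append(len({tuple(row) for row in idx}))
        # slope of log N(eps) versus log(1/eps) is the dimension estimate
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # Hypothetical check: points confined to a plane inside a cube should give ~2
    pts = np.random.rand(20000, 3)
    pts[:, 2] = 0.5
    print(box_counting_dimension(pts, [2.0**-k for k in range(1, 7)]))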
CoCoNUT: an efficient system for the comparison and analysis of genomes
2008-01-01
Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system CoCoNUT (Computational Comparative geNomics Utility Toolkit) that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools w.r.t. the quality of the results. The use of state of the art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large scale studies in comparative genomics. PMID:19014477
A multiscale model for reinforced concrete with macroscopic variation of reinforcement slip
NASA Astrophysics Data System (ADS)
Sciegaj, Adam; Larsson, Fredrik; Lundgren, Karin; Nilenius, Filip; Runesson, Kenneth
2018-06-01
A single-scale model for reinforced concrete, comprising the plain concrete continuum, reinforcement bars and the bond between them, is used as a basis for deriving a two-scale model. The large-scale problem, representing the "effective" reinforced concrete solid, is enriched by an effective reinforcement slip variable. The subscale problem on a Representative Volume Element (RVE) is defined by Dirichlet boundary conditions. The response of the RVEs of different sizes was investigated by means of pull-out tests. The resulting two-scale formulation was used in an FE^2 analysis of a deep beam. Load-deflection relations, crack widths, and strain fields were compared to those obtained from a single-scale analysis. Incorporating the independent macroscopic reinforcement slip variable resulted in a more pronounced localisation of the effective strain field. This produced a more accurate estimation of the crack widths than the two-scale formulation neglecting the effective reinforcement slip variable.
Jia, Tongying; Yuan, Huiyun
2017-04-12
Many large-scale public hospitals have established branched hospitals in China. This study aims to provide evidence for strategy making on the management and development of multi-branched hospitals by evaluating and comparing the operational efficiencies of different hospitals before and after their establishment of branched hospitals. DEA (Data Envelopment Analysis) window analysis was performed on a 7-year data pool from five public hospitals provided by health authorities and institutional surveys. The operational efficiencies of sample hospitals measured in this study (including technical efficiency, pure technical efficiency and scale efficiency) showed an overall increasing trend during this 7-year period; however, a temporary downturn occurred shortly after the establishment of branched hospitals; pure technical efficiency contributed more to the improvement of technical efficiency compared to scale efficiency. The establishment of branched hospitals did not lead to a long-term negative effect on hospital operational efficiencies. Our data indicated the importance of improving scale efficiency via the optimization of organizational management, as well as the advantage of a different form of branch-establishment, merging and reorganization. This study provides insight into the practical application of DEA window analysis to the assessment of hospital operational efficiencies.
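To make the efficiency notions concrete, the sketch below solves a minimal input-oriented CCR (constant returns to scale) DEA model with a linear-programming solver; it is not the windowed DEA used in the study, and the hospital inputs and outputs are hypothetical. Pure technical efficiency (the BCC model) would add the convexity constraint that the lambda weights sum to one, and scale efficiency is the ratio of the two scores.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR efficiency of decision-making unit (DMU) o.
        X : (m, n) inputs, Y : (s, n) outputs; columns are DMUs.
        Decision variables are [theta, lambda_1, ..., lambda_n]."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(n + 1)
        c[0] = 1.0                                  # minimise theta
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[:, o]                      # sum_j lambda_j x_ij <= theta * x_io
        A_ub[:m, 1:] = X
        A_ub[m:, 1:] = -Y                           # sum_j lambda_j y_rj >= y_ro
        b_ub[m:] = -Y[:, o]
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
        return res.x[0]

    # Hypothetical hospital data: 2 inputs (beds, staff), 1 output (discharges)
    X = np.array([[300., 500., 420.], [800., 1500., 1000.]])
    Y = np.array([[20000., 45000., 30000.]])
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])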
Comparing Achievement between K-8 and Middle Schools: A Large-Scale Empirical Study
ERIC Educational Resources Information Center
Byrnes, Vaughan; Ruby, Allen
2007-01-01
This study compares middle schools to K-8 schools, as well as to newly formed K-8 schools that are part of a K-8 conversion policy. The outcome is student achievement, and our sample includes 40,883 eighth-grade students from 95 schools across five cohorts. The analysis uses multilevel modeling to account for student, cohort, and school-level…
An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)
ERIC Educational Resources Information Center
Peterson, Shelley Stagg; McClay, Jill; Main, Kristin
2011-01-01
This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100
2015-01-15
In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting the behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
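A minimal FBA sketch on a toy, hypothetical network is shown below: maximise a flux objective subject to steady-state stoichiometry and flux bounds. The thermodynamic constraints that EBA adds are nonlinear and are not shown here.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network (hypothetical): uptake -> A, A -> B, A -> C, C -> B, B -> biomass.
    # Rows of S are internal metabolites A, B, C; columns are the five reactions.
    S = np.array([
        [1., -1., -1.,  0.,  0.],   # A
        [0.,  1.,  0.,  1., -1.],   # B
        [0.,  0.,  1., -1.,  0.],   # C
    ])
    lb = np.array([0., 0., 0., 0., 0.])                 # irreversible reactions
    ub = np.array([10., 1000., 1000., 1000., 1000.])    # uptake capped at 10
    objective = np.array([0., 0., 0., 0., 1.])          # maximise biomass flux

    # FBA: maximise c.v subject to S v = 0 and lb <= v <= ub
    res = linprog(-objective, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method='highs')
    print("optimal biomass flux:", -res.fun)   # equals the uptake cap of 10
    print("flux vector:", res.x)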
Static analysis techniques for semiautomatic synthesis of message passing software skeletons
Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...
2015-06-29
The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
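The pipeline described above combines superpixels with a Random Forest classifier. The sketch below shows a generic version of that idea using scikit-image and scikit-learn; the mean-colour features, segment counts and function names are assumptions for illustration and do not reproduce the authors' feature set.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.ensemble import RandomForestClassifier

    def superpixel_features(image, segments):
        """Mean RGB colour of every superpixel (a deliberately simple feature set)."""
        labels = np.unique(segments)
        feats = [image[segments == label].mean(axis=0) for label in labels]
        return np.array(feats), labels

    def segment_plant(image, clf, n_segments=400):
        """Label each superpixel as plant (1) or background (0) with a trained classifier."""
        segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
        feats, labels = superpixel_features(image, segments)
        pred = clf.predict(feats)
        return np.isin(segments, labels[pred == 1])   # boolean plant mask

    # Training would use superpixel features from hand-labelled images, e.g.
    #   clf = RandomForestClassifier(n_estimators=200).fit(train_feats, train_labels)
    # and plant area over time is then simply plant_mask.sum() per image.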
Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.
Chen, Rong; Nixon, Erika; Herskovits, Edward
2016-04-01
Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
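The core computation behind a seed-based analysis is a correlation map between the mean seed time series and every voxel. The numpy sketch below is a minimal illustration of that step, not ACA's implementation; the array shapes, random data and seed placement are assumptions.

    import numpy as np

    def seed_correlation_map(data, seed_mask):
        """Pearson correlation between the mean seed time series and every voxel.
        data      : 4-D array (x, y, z, t) of preprocessed rs-fMRI signal
        seed_mask : 3-D boolean array marking the seed region"""
        t = data.shape[-1]
        seed_ts = data[seed_mask].mean(axis=0)                 # (t,)
        v = data.reshape(-1, t)
        v = v - v.mean(axis=1, keepdims=True)
        s = seed_ts - seed_ts.mean()
        num = v @ s
        den = np.sqrt((v**2).sum(axis=1) * (s**2).sum())
        with np.errstate(invalid='ignore', divide='ignore'):
            r = num / den
        return r.reshape(data.shape[:-1])

    # Hypothetical example with random data and a 3x3x3 "seed"
    data = np.random.randn(20, 20, 10, 120)
    seed = np.zeros(data.shape[:-1], dtype=bool)
    seed[9:12, 9:12, 4:7] = True
    rmap = seed_correlation_map(data, seed)
    print(rmap.shape, np.nanmax(rmap))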
On the spatial distribution of small heavy particles in homogeneous shear turbulence
NASA Astrophysics Data System (ADS)
Nicolai, C.; Jacob, B.; Piva, R.
2013-08-01
We report on a novel experiment aimed at investigating the effects induced by a large-scale velocity gradient on the turbulent transport of small heavy particles. For this purpose, a homogeneous shear flow at Reλ = 540 and shear parameter S* = 4.5 is set up and laden with glass spheres whose size d is comparable with the Kolmogorov lengthscale η of the flow (d/η ≈ 1). The particle Stokes number is approximately 0.3. The analysis of the instantaneous particle fields by means of Voronoï diagrams confirms the occurrence of intense turbulent clustering at small scales, as observed in homogeneous isotropic flows. It also indicates that the anisotropy of the velocity fluctuations induces a preferential orientation of the particle clusters. In order to characterize the fine-scale features of the dispersed phase, spatial correlations of the particle field are employed in conjunction with statistical tools recently developed for anisotropic turbulence. The scale-by-scale analysis of the particle field clarifies that isotropy of the particle distribution tends to be recovered at small separations, even though the signatures of the mean shear persist down to smaller scales as compared to the fluid velocity field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of Lyman-α forest, in which Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict the velocity bias of -0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for the highly biased galaxies, but this depends on the model parameters and can also vanish or change the sign.
Gray, Nicola; Lewis, Matthew R; Plumb, Robert S; Wilson, Ian D; Nicholson, Jeremy K
2015-06-05
A new generation of metabolic phenotyping centers are being created to meet the increasing demands of personalized healthcare, and this has resulted in a major requirement for economical, high-throughput metabonomic analysis by liquid chromatography-mass spectrometry (LC-MS). Meeting these new demands represents an emerging bioanalytical problem that must be solved if metabolic phenotyping is to be successfully applied to large clinical and epidemiological sample sets. Ultraperformance (UP)LC-MS-based metabolic phenotyping, based on 2.1 mm i.d. LC columns, enables comprehensive metabolic phenotyping but, when employed for the analysis of thousands of samples, results in high solvent usage. The use of UPLC-MS employing 1 mm i.d. columns for metabolic phenotyping rather than the conventional 2.1 mm i.d. methodology shows that the resulting optimized microbore method provided equivalent or superior performance in terms of peak capacity, sensitivity, and robustness. On average, we also observed, when using the microbore scale separation, an increase in response of 2-3 fold over that obtained with the standard 2.1 mm scale method. When applied to the analysis of human urine, the 1 mm scale method showed no decline in performance over the course of 1000 analyses, illustrating that microbore UPLC-MS represents a viable alternative to conventional 2.1 mm i.d. formats for routine large-scale metabolic profiling studies while also resulting in a 75% reduction in solvent usage. The modest increase in sensitivity provided by this methodology also offers the potential to either reduce sample consumption or increase the number of metabolite features detected with confidence due to the increased signal-to-noise ratios obtained. Implementation of this miniaturized UPLC-MS method of metabolic phenotyping results in clear analytical, economic, and environmental benefits for large-scale metabolic profiling studies with similar or improved analytical performance compared to conventional UPLC-MS.
Goch, Caspar J; Stieltjes, Bram; Henze, Romy; Hering, Jan; Poustka, Luise; Meinzer, Hans-Peter; Maier-Hein, Klaus H
2014-05-01
Diagnosis of autism spectrum disorders (ASD) is difficult, as symptoms vary greatly and are difficult to quantify objectively. Recent work has focused on the assessment of non-invasive diffusion tensor imaging-based biomarkers that reflect the microstructural characteristics of neuronal pathways in the brain. While tractography-based approaches typically analyze specific structures of interest, a graph-based large-scale network analysis of the connectome can yield comprehensive measures of larger-scale architectural patterns in the brain. Commonly applied global network indices, however, do not provide any specificity with respect to functional areas or anatomical structures. The aim of this work was to assess the concept of network centrality as a tool to perform locally specific analysis without disregarding the global network architecture and compare it to other popular network indices. We create connectome networks from fiber tractographies and parcellations of the human brain and compute global network indices as well as local indices for Wernicke's area, Broca's area and the motor cortex. Our approach was evaluated on 18 children suffering from ASD and 18 typically developed controls using magnetic resonance imaging-based cortical parcellations in combination with diffusion tensor imaging tractography. We show that the network centrality of Wernicke's area is significantly (p<0.001) reduced in ASD, while the motor cortex, which was used as a control region, did not show significant alterations. This could reflect the reduced capacity for comprehension of language in ASD. The betweenness centrality could potentially be an important metric in the development of future diagnostic tools in the clinical context of ASD diagnosis. Our results further demonstrate the applicability of large-scale network analysis tools in the domain of region-specific analysis with a potential application in many different psychological disorders.
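Betweenness centrality of a parcel in a weighted connectome graph can be computed directly with networkx, as in the sketch below; the node names, edge weights and the strength-to-distance conversion are hypothetical illustrations, not the study's data or pipeline.

    import networkx as nx

    # Hypothetical weighted connectome: nodes are cortical parcels, edge weights are
    # streamline counts from tractography (toy numbers for illustration only).
    edges = [("Wernicke", "Broca", 120), ("Wernicke", "MotorCortex", 40),
             ("Broca", "MotorCortex", 80), ("Wernicke", "AngularGyrus", 95),
             ("AngularGyrus", "Broca", 30)]

    G = nx.Graph()
    for u, v, w in edges:
        # betweenness uses shortest paths, so convert connection "strength" to a distance
        G.add_edge(u, v, weight=w, distance=1.0 / w)

    bc = nx.betweenness_centrality(G, weight="distance", normalized=True)
    print(sorted(bc.items(), key=lambda kv: -kv[1]))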
Sensitivity analysis of key components in large-scale hydroeconomic models
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.
2008-12-01
This paper explores the likely impact of different estimation methods in key components of hydro-economic models such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly-elastic urban water demands, and different valuation of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods of ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence in this allocation. Overall optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.
Herbivorous fishes, ecosystem function and mobile links on coral reefs
NASA Astrophysics Data System (ADS)
Welsh, J. Q.; Bellwood, D. R.
2014-06-01
Understanding large-scale movement of ecologically important taxa is key to both species and ecosystem management. Those species responsible for maintaining functional connectivity between habitats are often called mobile links and are regarded as essential elements of resilience. By providing connectivity, they support resilience across spatial scales. Most marine organisms, including fishes, have long-term, biogeographic-scale connectivity through larval movement. Although most reef species are highly site attached after larval settlement, some taxa may also be able to provide rapid, reef-scale connectivity as adults. On coral reefs, the identity of such taxa and the extent of their mobility are not yet known. We use acoustic telemetry to monitor the movements of Kyphosus vaigiensis, one of the few reef fishes that feeds on adult brown macroalgae. Unlike other benthic herbivorous fish species, it also exhibits large-scale (>2 km) movements. Individual K. vaigiensis cover, on average, a 2.5 km length of reef (11 km maximum) each day. These large-scale movements suggest that this species may act as a mobile link, providing functional connectivity, should the need arise, and helping to support functional processes across habitats and spatial scales. An analysis of published studies of home ranges in reef fishes found a consistent relationship between home range size and body length. K. vaigiensis is the sole herbivore to depart significantly from the expected home range-body size relationship, with home range sizes more comparable to exceptionally mobile large pelagic predators rather than other reef herbivores. While the large-scale movements of K. vaigiensis reveal its potential capacity to enhance resilience over large areas, it also emphasizes the potential limitations of small marine reserves to protect some herbivore populations.
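The home range versus body size comparison mentioned above is typically done as a log-log (power-law) regression. The sketch below shows that fitting step with entirely hypothetical numbers; it is only meant to illustrate how a species' departure from the expected relationship (the residual) would be quantified.

    import numpy as np

    # Hypothetical (body length in cm, home range in m^2) pairs for reef herbivores
    length = np.array([12., 18., 25., 30., 38., 45.])
    home_range = np.array([150., 420., 900., 1600., 3200., 5200.])

    # Fit log10(home range) = a + b * log10(length)
    b, a = np.polyfit(np.log10(length), np.log10(home_range), 1)

    def predicted_range(l_cm):
        return 10**(a + b * np.log10(l_cm))

    # Residual (in orders of magnitude) for an unusually mobile species;
    # the observed value below is a made-up area proxy, not a measurement.
    observed = 2.5e6
    print("scaling exponent b =", round(b, 2))
    print("log10 residual:", round(np.log10(observed / predicted_range(40.)), 2))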
Danis, Ildiko; Scheuring, Noemi; Papp, Eszter; Czinner, Antal
2012-06-01
A new instrument for assessing depressive mood, the first version of the Depression Scale Questionnaire (DS1K), was published in 2008 by Halmai et al. This scale was used in our large sample study, in the framework of the For Healthy Offspring project, involving parents of young children. The original questionnaire was developed in small samples, so our aim was to assist further development of the instrument by the psychometric analysis of the data in our large sample (n=1164). The DS1K scale was chosen to measure the parents' mood and mental state in the For Healthy Offspring project. The questionnaire was completed by 1063 mothers and 328 fathers, yielding a heterogeneous sample with respect to age and socio-demographic status. Analyses included main descriptive statistics, establishing the scale's internal consistency and some comparisons. Results were checked in our original and multiple imputed datasets as well. According to our results the reliability of our scale was much worse than in the original study (Cronbach alpha: 0.61 versus 0.88). During the detailed item-analysis it became clear that two items contributed to the observed decrease in consistency. We assumed a problem related to misreading in the case of one of these items. This assumption was checked by cross-analysis by assumed reading level. According to our results the reliability of the scale was increased in both the lower and higher education level groups if we did not include one or both of these problematic items. However, as the number of items decreased, the relative sensitivity of the scale was also reduced, with fewer persons categorized in the risk group compared to the original scale. As an alternative solution, we suggest that the authors redefine the problematic items and retest the reliability of the measurement in a sample with diverse socio-demographic characteristics.
Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025
ERIC Educational Resources Information Center
Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.
2012-01-01
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA) where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3-d aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales despite CESM being a global General Circulation Model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps although the performance of different erodibility maps varies by region.
Scaling of the velocity fluctuations in turbulent channels up to Reτ=2003
NASA Astrophysics Data System (ADS)
Hoyas, Sergio; Jiménez, Javier
2006-01-01
A new numerical simulation of a turbulent channel in a large box at Reτ=2003 is described and briefly compared with simulations at lower Reynolds numbers and with experiments. Some of the fluctuation intensities, especially the streamwise velocity, do not scale well in wall units, both near and away from the wall. Spectral analysis traces the near-wall scaling failure to the interaction of the logarithmic layer with the wall. The present statistics can be downloaded from http://torroja.dmt.upm.es/ftp/channels. Further ones will be added to the site as they become available.
Large-scale impacts of herbivores on the structural diversity of African savannas
Asner, Gregory P.; Levick, Shaun R.; Kennedy-Bowdoin, Ty; Knapp, David E.; Emerson, Ruth; Jacobson, James; Colgan, Matthew S.; Martin, Roberta E.
2009-01-01
African savannas are undergoing management intensification, and decision makers are increasingly challenged to balance the needs of large herbivore populations with the maintenance of vegetation and ecosystem diversity. Ensuring the sustainability of Africa's natural protected areas requires information on the efficacy of management decisions at large spatial scales, but often neither experimental treatments nor large-scale responses are available for analysis. Using a new airborne remote sensing system, we mapped the three-dimensional (3-D) structure of vegetation at a spatial resolution of 56 cm throughout 1640 ha of savanna after 6-, 22-, 35-, and 41-year exclusions of herbivores, as well as in unprotected areas, across Kruger National Park in South Africa. Areas in which herbivores were excluded over the short term (6 years) contained 38%–80% less bare ground compared with those that were exposed to mammalian herbivory. In the longer-term (> 22 years), the 3-D structure of woody vegetation differed significantly between protected and accessible landscapes, with up to 11-fold greater woody canopy cover in the areas without herbivores. Our maps revealed 2 scales of ecosystem response to herbivore consumption, one broadly mediated by geologic substrate and the other mediated by hillslope-scale variation in soil nutrient availability and moisture conditions. Our results are the first to quantitatively illustrate the extent to which herbivores can affect the 3-D structural diversity of vegetation across large savanna landscapes. PMID:19258457
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/ . teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
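The Dice and Jaccard metrics reported above have standard definitions; a minimal sketch of both, applied to hypothetical binary segmentation masks, is given below. It illustrates the quantities only, not the sensitivity-analysis framework itself.

    import numpy as np

    def dice(a, b):
        """Dice coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def jaccard(a, b):
        """Jaccard index (intersection over union) between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union

    # Hypothetical comparison of a tuned segmentation against a reference mask
    ref = np.zeros((100, 100), dtype=bool); ref[20:60, 20:60] = True
    out = np.zeros((100, 100), dtype=bool); out[25:65, 22:62] = True
    print(dice(ref, out), jaccard(ref, out))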
ERIC Educational Resources Information Center
Oon, Pey Tee; Subramaniam, R.
2018-01-01
We report here on a comparative study of middle school students' attitudes towards science involving three countries: England, Singapore and the U.S.A. Complete attitudinal data sets from TIMSS (Trends in International Mathematics and Science Study) 2011 were used, thus giving a very large sample size (N = 20,246), compared to other studies in the…
Scale growth of structures in the turbulent boundary layer with a rod-roughened wall
NASA Astrophysics Data System (ADS)
Lee, Jin; Kim, Jung Hoon; Lee, Jae Hwa
2016-01-01
Direct numerical simulation of a turbulent boundary layer over a rod-roughened wall is performed with a long streamwise domain to examine the streamwise-scale growth mechanism of streamwise velocity fluctuating structures in the presence of two-dimensional (2-D) surface roughness. An instantaneous analysis shows that there is a slightly larger population of long structures with a small helix angle (spanwise inclinations relative to streamwise) and a large spanwise width over the rough-wall compared to that over a smooth-wall. Further inspection of time-evolving instantaneous fields clearly exhibits that adjacent long structures combine to form a longer structure through a spanwise merging process over the rough-wall; moreover, spanwise merging for streamwise scale growth is expected to occur frequently over the rough-wall due to the large spanwise scales generated by the 2-D roughness. Finally, we examine the influence of a large width and a small helix angle of the structures over the rough-wall with regard to spatial two-point correlation. The results show that these factors can increase the streamwise coherence of the structures in a statistical sense.
An integrated approach to reconstructing genome-scale transcriptional regulatory networks
Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...
2015-02-27
Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-Proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall was also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.
Genomic analysis of organismal complexity in the multicellular green alga Volvox carteri
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnik, Simon E.; Umen, James; Nedelcu, Aurora
2010-07-01
Analysis of the Volvox carteri genome reveals that this green alga's increased organismal complexity and multicellularity are associated with modifications in protein families shared with its unicellular ancestor, and not with large-scale innovations in protein coding capacity. The multicellular green alga Volvox carteri and its morphologically diverse close relatives (the volvocine algae) are uniquely suited for investigating the evolution of multicellularity and development. We sequenced the 138 Mb genome of V. carteri and compared its ~14,500 predicted proteins to those of its unicellular relative, Chlamydomonas reinhardtii. Despite fundamental differences in organismal complexity and life history, the two species have similar protein-coding potentials, and few species-specific protein-coding gene predictions. Interestingly, volvocine algal-specific proteins are enriched in Volvox, including those associated with an expanded and highly compartmentalized extracellular matrix. Our analysis shows that increases in organismal complexity can be associated with modifications of lineage-specific proteins rather than large-scale invention of protein-coding capacity.
NASA Astrophysics Data System (ADS)
Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio
2010-11-01
The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well-known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and show how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (LVel/LTracer≡k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short scale periodic structure of the scalar's initial data, for both exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a short cut to the exact solution to the eigenvalue problem for the physically relevant vanishing Neumann boundary conditions in linear-shear channel flow. We show that the life of the corresponding modes at large Pe for this case is shorter than the ones arising from shear free zones in the fluid's interior. A WKBJ study of the latter modes provides a longer intermediate time evolution. This part of the analysis is technical, as the corresponding spectrum is dominated by asymptotically coalescing turning points in the limit of large Pe numbers. When large scale initial data components are present, the transient regime of the WKBJ (anomalous) modes evolves into one governed by Taylor dispersion. This is studied by a regular perturbation expansion of the spectrum in the small wavenumber regimes.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
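The abstract does not specify which quasi-Newton scheme is used; as one common example, the sketch below shows the BFGS update of an approximate inverse Hessian, which replaces repeated tangent-stiffness factorizations with curvature information gathered from successive residuals. It is purely illustrative and does not reproduce the paper's domain-decomposition solver.

    import numpy as np

    def bfgs_update(H, s, y):
        """BFGS update of an approximate inverse Hessian H.
        s = x_{k+1} - x_k (step), y = grad_{k+1} - grad_k (gradient change)."""
        rho = 1.0 / (y @ s)
        I = np.eye(H.shape[0])
        return ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                + rho * np.outer(s, s))

    # The update enforces the secant condition H_new @ y = s exactly:
    H = np.eye(3)
    s = np.array([0.5, -0.2, 0.1])
    y = np.array([1.0, 0.3, -0.4])
    H_new = bfgs_update(H, s, y)
    print(np.allclose(H_new @ y, s))   # True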
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane
2015-05-01
The increasing interest in retrofitting of existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emission by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustratesmore » both the mechanics and outcomes of a practical alternative to the expert- and time-intense application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.« less
A Comparison of Obsessive-Compulsive Personality Disorder Scales
Samuel, Douglas B.; Widiger, Thomas A.
2010-01-01
The current study utilized a large undergraduate sample (n = 536), oversampled for DSM-IV-TR obsessive-compulsive personality disorder (OCPD) pathology, to compare eight self-report measures of OCPD. No prior study has compared more than three measures and the results indicated that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to two external reference points: Their relationships with a well established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison the results suggested important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, while some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder. PMID:20408023
Do large-scale assessments measure students' ability to integrate scientific knowledge?
NASA Astrophysics Data System (ADS)
Lee, Hee-Sun
2010-03-01
Large-scale assessments are used as a means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. The study collected responses of 8,400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool for assessing students' knowledge integration ability.
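For reference, the Rasch Partial Credit Model underlying the analysis has the standard form due to Masters (1982); this is the textbook expression, not a formula quoted from the study.
```latex
% For person n with ability \theta_n and item i with step difficulties
% \delta_{ij} and maximum score m_i, the probability of scoring x is
\begin{equation}
  P(X_{ni} = x \mid \theta_n)
  = \frac{\exp \sum_{j=0}^{x} (\theta_n - \delta_{ij})}
         {\sum_{h=0}^{m_i} \exp \sum_{j=0}^{h} (\theta_n - \delta_{ij})},
  \qquad \sum_{j=0}^{0}(\theta_n - \delta_{ij}) \equiv 0 .
\end{equation}
```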
Fitting a Point Cloud to a 3d Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has a tremendous potential in a number of industrial applications. However, this problem is usually associated with an ambiguous task to compare two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and Stretched grid method (SGM) to substitute a non-linear problem solution with several linear steps. The squared distance (SD) is a general criterion to control the process of convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the geometry remote measurement of large-scale objects in a contactless fashion.
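A minimal sketch of the PCA step described above, assuming numpy and synthetic data; the iterative Stretched Grid Method fit and the squared-distance convergence control used by the authors are not reproduced here.
```python
import numpy as np

def pca_align(points):
    """Express a point cloud in its principal-axis frame.

    points : (N, 3) array of measured coordinates.
    Returns (aligned points, centroid, rotation matrix).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes = eigenvectors of the 3x3 covariance matrix.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    axes = eigvecs[:, ::-1]                    # largest variance first
    return centered @ axes, centroid, axes

# Synthetic data standing in for a scanned parabolic frame.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3)) * [5.0, 2.0, 0.2] + [10.0, -3.0, 1.0]
aligned, c, R = pca_align(cloud)
# The squared distance to a target surface would then drive the iterative fit.
print(aligned.std(axis=0))   # spreads now sorted in descending order
```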
Revealing the Hidden Relationship by Sparse Modules in Complex Networks with a Large-Scale Analysis
Jiao, Qing-Ju; Huang, Yan; Liu, Wei; Wang, Xiao-Fan; Chen, Xiao-Shuang; Shen, Hong-Bin
2013-01-01
One of the remarkable features of networks is the presence of modules, which can provide useful insights into not only network organization but also the functional behaviors of their components. Comprehensive efforts have been devoted to investigating cohesive modules in the past decade. However, it is still not clear whether there are important structural characteristics of the nodes that do not belong to any cohesive module. In order to answer this question, we performed a large-scale analysis on 25 complex networks of different types and scales using our recently developed BTS (bintree seeking) algorithm, which is able to detect both cohesive and sparse modules in a network. Our results reveal that sparse modules, composed of cohesively isolated nodes, widely co-exist with cohesive modules. Detailed analysis shows that both types of modules provide a better characterization of the division of a network into functional units than cohesive modules alone, because the sparse modules possibly re-organize the nodes in so-called cohesive modules that lack obvious modular significance into meaningful groups. Compared with cohesive modules, the sizes of sparse ones are generally smaller. Sparse modules are also found to be more prevalent in social and biological networks than in other types. PMID:23762457
Survival analysis for a large scale forest health issue: Missouri oak decline
C.W. Woodall; P.L. Grambsch; W. Thomas; W.K. Moser
2005-01-01
Survival analysis methodologies provide novel approaches for forest mortality analysis that may aid in detecting, monitoring, and mitigating large-scale forest health issues. This study examined survival analysis for evaluating a regional forest health issue - Missouri oak decline. With a statewide Missouri forest inventory, log-rank tests of the effects of...
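As an illustration of the log-rank machinery mentioned above, here is a small sketch using the lifelines package with made-up tree survival records; the column names, grouping, and data are hypothetical and not drawn from the Missouri inventory.
```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up records: years observed, mortality indicator, site class.
df = pd.DataFrame({
    "years_observed": [3, 7, 10, 4, 9, 10, 2, 6, 8, 10],
    "died":           [1, 1,  0, 1, 0,  0, 1, 1, 1,  0],   # 1 = tree mortality
    "site":           ["decline"] * 5 + ["healthy"] * 5,
})

kmf = KaplanMeierFitter()
for site, grp in df.groupby("site"):
    kmf.fit(grp["years_observed"], event_observed=grp["died"], label=site)
    print(site, "median survival:", kmf.median_survival_time_)

a = df[df.site == "decline"]
b = df[df.site == "healthy"]
res = logrank_test(a.years_observed, b.years_observed,
                   event_observed_A=a.died, event_observed_B=b.died)
print("log-rank p-value:", res.p_value)
```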
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.
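A simplified sketch of the idea of scale-wise SEMG features correlated with grip force, using PyWavelets; the synthetic signal, the Morlet wavelet choice, and the RMS-envelope feature are stand-ins for the paper's sensitivity and sequence-combination analysis rather than its actual pipeline.
```python
import numpy as np
import pywt

fs = 1000.0                                    # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
force = 50 + 30 * np.sin(2 * np.pi * 0.2 * t)            # synthetic grip force
rng = np.random.default_rng(0)
semg = 1e-3 * force * rng.standard_normal(t.size)        # amplitude-modulated noise

scales = np.arange(1, 11)                      # ten wavelet scales, as in the paper
coeffs, _ = pywt.cwt(semg, scales, "morl", sampling_period=1 / fs)

win = 200                                      # RMS-envelope window (samples)
feats = [np.sqrt(np.convolve(c ** 2, np.ones(win) / win, mode="same"))
         for c in coeffs]

# Rank scales by how strongly their envelope tracks the measured force.
for s, f in zip(scales, feats):
    r = np.corrcoef(f, force)[0, 1]
    print(f"scale {s:2d}: |corr with force| = {abs(r):.2f}")
```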
Wang, Guan; Zhao, Junfei; Haringa, Cees; Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Deshmukh, Amit T; van Gulik, Walter; Heijnen, Joseph J; Noorman, Henk J
2018-05-01
In a 54 m³ large-scale penicillin fermentor, the cells experience substrate gradient cycles at the timescale of the global mixing time, about 20-40 s. Here, we used an intermittent feeding regime (IFR) and a two-compartment reactor (TCR) to mimic these substrate gradients in laboratory-scale continuous cultures. The IFR was applied to simulate substrate dynamics experienced by the cells at full scale at timescales of tens of seconds to minutes (30 s, 3 min and 6 min), while the TCR was designed to simulate substrate gradients at an applied mean residence time (τc) of 6 min. A biological systems analysis of the response of an industrial high-yielding P. chrysogenum strain has been performed in these continuous cultures. Compared to an undisturbed continuous feeding regime in a single reactor, the penicillin productivity (q_PenG) was reduced in all scale-down simulators. The dynamic metabolomics data indicated that in the IFRs, the cells accumulated high levels of the central metabolites during the feast phase to actively cope with external substrate deprivation during the famine phase. In contrast, in the TCR system, the storage pool (e.g. mannitol and arabitol) constituted a large contribution of carbon supply in the non-feed compartment. Further, transcript analysis revealed that all scale-down simulators gave different expression levels of the glucose/hexose transporter genes and the penicillin gene clusters. The results showed that q_PenG did not correlate well with exposure to the substrate regimes (excess, limitation and starvation), but there was a clear inverse relation between q_PenG and the intracellular glucose level. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Proteinortho: detection of (co-)orthologs in large-scale analysis.
Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J
2011-04-28
Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practise. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
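Proteinortho's extended reciprocal best alignment heuristic is more involved than shown here, but the basic reciprocal-best-hit idea it builds on can be sketched in a few lines; the hit tables below are invented placeholders.
```python
# Each hit table maps a query protein to candidate (subject, bit score) pairs,
# e.g. as parsed from an all-vs-all alignment between two proteomes A and B.
def best_hits(hits):
    """hits: dict mapping query id -> list of (subject id, score)."""
    return {q: max(subjects, key=lambda s: s[1])[0]
            for q, subjects in hits.items() if subjects}

def reciprocal_best_hits(hits_ab, hits_ba):
    best_ab = best_hits(hits_ab)
    best_ba = best_hits(hits_ba)
    # Keep pairs where each protein is the other's best hit.
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

hits_ab = {"geneA1": [("geneB1", 250.0), ("geneB2", 90.0)],
           "geneA2": [("geneB2", 310.0)]}
hits_ba = {"geneB1": [("geneA1", 245.0)],
           "geneB2": [("geneA2", 300.0), ("geneA1", 80.0)]}
print(reciprocal_best_hits(hits_ab, hits_ba))
# [('geneA1', 'geneB1'), ('geneA2', 'geneB2')]
```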
Vanacker, Peter; Heldner, Mirjam R; Amiguet, Michael; Faouzi, Mohamed; Cras, Patrick; Ntaios, George; Arnold, Marcel; Mattle, Heinrich P; Gralla, Jan; Fischer, Urs; Michel, Patrik
2016-06-01
Endovascular treatment for acute ischemic stroke with a large vessel occlusion was recently shown to be effective. We aimed to develop a score capable of predicting large vessel occlusion eligible for endovascular treatment in the early hospital management. Retrospective, cohort study. Two tertiary, Swiss stroke centers. Consecutive acute ischemic stroke patients (1,645 patients; Acute STroke Registry and Analysis of Lausanne registry), who had CT angiography within 6 and 12 hours of symptom onset, were categorized according to the occlusion site. Demographic and clinical information was used in logistic regression analysis to derive predictors of large vessel occlusion (defined as intracranial carotid, basilar, and M1 segment of middle cerebral artery occlusions). Based on logistic regression coefficients, an integer score was created and validated internally and externally (848 patients; Bernese Stroke Registry). None. Large vessel occlusions were present in 316 patients (21%) in the derivation and 566 (28%) in the external validation cohort. Five predictors added significantly to the score: National Institute of Health Stroke Scale at admission, hemineglect, female sex, atrial fibrillation, and no history of stroke and prestroke handicap (modified Rankin Scale score, < 2). Diagnostic accuracy in internal and external validation cohorts was excellent (area under the receiver operating characteristic curve, 0.84 both). The score performed slightly better than National Institute of Health Stroke Scale alone regarding prediction error (Wilcoxon signed rank test, p < 0.001) and regarding discriminatory power in derivation and pooled cohorts (area under the receiver operating characteristic curve, 0.81 vs 0.80; DeLong test, p = 0.02). Our score accurately predicts the presence of emergent large vessel occlusions, which are eligible for endovascular treatment. However, incorporation of additional demographic and historical information available on hospital arrival provides minimal incremental predictive value compared with the National Institute of Health Stroke Scale alone.
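A sketch of the general recipe described in the abstract: fit a logistic model for large vessel occlusion, convert the coefficients into integer points, and check discrimination with the ROC AUC. The variables and data below are synthetic placeholders, not the Lausanne or Bern registries.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 30, n),      # NIHSS at admission
    rng.integers(0, 2, n),       # hemineglect
    rng.integers(0, 2, n),       # female sex
    rng.integers(0, 2, n),       # atrial fibrillation
])
logit = -4 + 0.2 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated occlusion status

model = LogisticRegression().fit(X, y)
# Scale coefficients so the smallest becomes roughly one point, then round.
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
score = X @ points                              # integer risk score per patient
print("integer weights:", points)
print("AUC of integer score:", round(roc_auc_score(y, score), 3))
```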
NASA Astrophysics Data System (ADS)
Jiang, H.; Lin, T.
2017-12-01
Rain-fed corn production systems are subject to sub-seasonal variations of precipitation and temperature during the growing season. Because each growth phase involves different physiological processes, plants require different optimal environmental conditions during each phase. However, this temporal heterogeneity in the response to climate variability over the crop lifecycle is often simplified to constant responses in large-scale statistical modeling analyses. To capture the time-variant growing requirements in large-scale statistical analysis, we develop and compare statistical models at various spatial and temporal resolutions to quantify the relationship between corn yield and weather factors for 12 corn belt states from 1981 to 2016. The study compares three spatial resolutions (county, agricultural district, and state scale) and three temporal resolutions (crop growth phase, monthly, and growing season) to characterize the effects of spatial and temporal variability. Our results show that the agricultural district model together with growth phase resolution can explain 52% of the variation in corn yield caused by temperature and precipitation variability. It provides a practical model structure balancing the overfitting problem of county-specific models and the weak explanatory power of state-specific models. In the US corn belt, precipitation has a positive impact on corn yield throughout the growing season except for the vegetative stage, while sensitivity to extreme heat is highest from the silking to dough phases. The results also show that the northern counties of the corn belt are less affected by extreme heat but more vulnerable to water deficiency.
Allometric scaling of UK urban emissions: interpretation and implications for air quality management
NASA Astrophysics Data System (ADS)
MacKenzie, Rob; Barnes, Matt; Whyatt, Duncan; Hewitt, Nick
2016-04-01
Allometry uncovers structures and patterns by relating the characteristics of complex systems to a measure of scale. We present an allometric analysis of air quality for UK urban settlements, beginning with emissions and moving on to consider air concentrations. We consider both airshed-average 'urban background' concentrations (cf. those derived from satellites for NO2) and local pollution 'hotspots'. We show that there is a strong and robust scaling (with respect to population) of the non-point-source emissions of the greenhouse gases carbon dioxide and methane, as well as the toxic pollutants nitrogen dioxide, PM2.5, and 1,3-butadiene. The scaling of traffic-related emissions is not simply a reflection of road length, but rather results from the socio-economic patterning of road-use. The recent controversy regarding diesel vehicle emissions is germane to our study but does not affect our overall conclusions. We next develop an hypothesis for the population-scaling of airshed-average air concentrations, with which we demonstrate that, although average air quality is expected to be worse in large urban centres compared to small urban centres, the overall effect is an economy of scale (i.e., large cities reduce the overall burden of emissions compared to the same population spread over many smaller urban settlements). Our hypothesis explains satellite-derived observations of airshed-average urban NO2 concentrations. The theory derived also explains which properties of nature-based solutions (urban greening) can make a significant contribution at city scale, and points to a hitherto unforeseen opportunity to make large cities cleaner than smaller cities in absolute terms with respect to their airshed-average pollutant concentration.
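The allometric relation E = a·P^b referred to above is usually estimated as a straight line in log-log space; here is a minimal sketch with invented settlement data, not the UK inventory analysed in the study.
```python
import numpy as np

population = np.array([5e3, 2e4, 8e4, 3e5, 1e6, 8e6])
emissions  = np.array([1.1e3, 5.0e3, 2.2e4, 9.0e4, 3.4e5, 3.0e6])  # tonnes/yr

# Ordinary least squares on log10-transformed data: slope = scaling exponent b.
b, log_a = np.polyfit(np.log10(population), np.log10(emissions), 1)
print(f"scaling exponent b = {b:.2f}, prefactor a = {10**log_a:.3g}")
# b > 1 indicates super-linear scaling; b < 1 an economy of scale.
```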
ERIC Educational Resources Information Center
Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu
2015-01-01
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis on the Critical Rainfall Value for Predicting Large-Scale Landslides Caused by Heavy Rainfall in Taiwan. Kuang-Jung Tsai (1), Jie-Lun Chiang (2), Ming-Hsi Lee (2), Yie-Ruey Chen (1); (1) Department of Land Management and Development, Chang Jung Christian University, Tainan, Taiwan; (2) Department of Soil and Water Conservation, National Pingtung University of Science and Technology, Pingtung, Taiwan. ABSTRACT: Typhoon Morakot brought an accumulated rainfall of more than 2,900 mm within three consecutive days in August 2009. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by their disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides were collected through field investigation integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and put into operation as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the extreme climate change of the past 10 years are addressed as required issues in this research. Hopefully, the results of this research can be used as a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
Magnetic fields are observed to exist on all scales in many astrophysical sources such as stars, galaxies, and accretion discs. Understanding the origin of large-scale magnetic fields, whereby the field emerges on spatial scales large compared to the fluctuations, has been a particularly long-standing challenge. Our physics objectives are: 1) what are the minimum ingredients for large-scale dynamo growth? 2) could a large-scale magnetic field grow out of turbulence and be sustained despite the presence of dissipation? These questions are fundamental for understanding the large-scale dynamo in both laboratory and astrophysical plasmas. Here, we report major new findings in the area of the large-scale dynamo (magnetic field generation).
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second-order Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of the turbulent kinetic energy at the top-tip location exhibits fast convergence compared to the bottom-tip and hub-height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is found in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.
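A minimal sketch of snapshot POD via the SVD, in the spirit of the decomposition described above; the snapshot matrix here is random stand-in data and the retained mode count is arbitrary.
```python
import numpy as np

n_points, n_snapshots = 256, 400
rng = np.random.default_rng(0)
u = rng.standard_normal((n_points, n_snapshots))  # velocity snapshots (columns)
u_fluct = u - u.mean(axis=1, keepdims=True)       # subtract the temporal mean

# Columns of `modes` are spatial POD modes; energy ranking follows s**2.
modes, s, vt = np.linalg.svd(u_fluct, full_matrices=False)
energy_fraction = np.cumsum(s**2) / np.sum(s**2)

k = 20                                            # number of retained modes
u_coherent = modes[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
u_stochastic = u_fluct - u_coherent               # small-scale / incoherent part
print("energy captured by 20 modes:", round(energy_fraction[k - 1], 3))
```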
ERIC Educational Resources Information Center
Powers, Stephen; And Others
Sex differences in attributions for success and failure in algebra of Samoan community college students were examined and compared with attributions of a large group of mainland U.S. students. The study included the Mathematics Attribution Scale: Algebra Version (MAS), which assessed students' attributions of achievement in algebra to their effort,…
A Cross-Cultural Comparison of Student Learning Patterns in Higher Education
ERIC Educational Resources Information Center
Marambe, Kosala N.; Vermunt, Jan D.; Boshuizen, Henny P. A.
2012-01-01
The aim of this study was to compare student learning patterns in higher education across different cultures. A meta-analysis was performed on three large-scale studies that had used the same research instrument: the Inventory of learning Styles (ILS). The studies were conducted in the two Asian countries Sri Lanka and Indonesia and the European…
ERIC Educational Resources Information Center
MacMillan, Peter D.
2000-01-01
Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…
NASA Astrophysics Data System (ADS)
Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos
2015-04-01
The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal and annual) and they are related to the mean daily temperature and the gross domestic product (GDP) of Greece for the same time period. The prediction of energy demand, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. Interesting results about the change of the electricity demand scheme after the year 2010 are derived. This change is related to the decrease of the GDP, during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system which will be a part of a framework for optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
Jiang, Zhenhong; He, Fei; Zhang, Ziding
2017-07-01
Through large-scale transcriptional data analyses, we highlighted the importance of plant metabolism in plant immunity and identified 26 metabolic pathways that were frequently influenced by the infection of 14 different pathogens. Reprogramming of plant metabolism is a common phenomenon in plant defense responses. Currently, a large number of transcriptional profiles of infected tissues in Arabidopsis (Arabidopsis thaliana) have been deposited in public databases, which provides a great opportunity to understand the expression patterns of metabolic pathways during plant defense responses at the systems level. Here, we performed a large-scale transcriptome analysis based on 135 previously published expression samples, including 14 different pathogens, to explore the expression pattern of Arabidopsis metabolic pathways. Overall, metabolic genes are significantly changed in expression during plant defense responses. Upregulated metabolic genes are enriched on defense responses, and downregulated genes are enriched on photosynthesis, fatty acid and lipid metabolic processes. Gene set enrichment analysis (GSEA) identifies 26 frequently differentially expressed metabolic pathways (FreDE_Paths) that are differentially expressed in more than 60% of infected samples. These pathways are involved in the generation of energy, fatty acid and lipid metabolism as well as secondary metabolite biosynthesis. Clustering analysis based on the expression levels of these 26 metabolic pathways clearly distinguishes infected and control samples, further suggesting the importance of these metabolic pathways in plant defense responses. By comparing with FreDE_Paths from abiotic stresses, we find that the expression patterns of 26 FreDE_Paths from biotic stresses are more consistent across different infected samples. By investigating the expression correlation between transcriptional factors (TFs) and FreDE_Paths, we identify several notable relationships. Collectively, the current study will deepen our understanding of plant metabolism in plant immunity and provide new insights into disease-resistant crop improvement.
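The study uses gene set enrichment analysis (GSEA), which ranks genes; as a simpler stand-in, the following sketch shows the over-representation (hypergeometric) test often used to ask whether a metabolic pathway contains more differentially expressed genes than expected by chance. All counts are invented.
```python
from scipy.stats import hypergeom

N = 20000   # genes measured
K = 150     # genes annotated to the pathway
n = 1200    # differentially expressed genes in an infected sample
k = 25      # pathway genes among the differentially expressed ones

# P(X >= k) under sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```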
Meta-analysis on Macropore Flow Velocity in Soils
NASA Astrophysics Data System (ADS)
Liu, D.; Gao, M.; Li, H. Y.; Chen, X.; Leung, L. R.
2017-12-01
Macropore flow is ubiquitous in soils and an important hydrologic process that is not well explained using traditional hydrologic theories. Macropore Flow Velocity (MFV) is an important parameter used to describe macropore flow and quantify its effects on runoff generation and solute transport. However, the dominant factors controlling MFV are still poorly understood and the typical ranges of MFV measured in the field are not clearly defined. To address these issues, we conducted a meta-analysis based on a database created from 246 experiments on MFV collected from 76 journal articles. For a fair comparison, a conceptually unified definition of MFV is introduced to convert the MFV measured with different approaches and at various scales, including soil core, field, trench or hillslope scales. The potential controlling factors of MFV considered include scale, travel distance, hydrologic conditions, site factors, macropore morphologies, soil texture, and land use. The results show that MFV is about 2-3 orders of magnitude larger than the corresponding values of saturated hydraulic conductivity. MFV is much larger at the trench and hillslope scale than at the field profile and soil core scales and shows a significant positive correlation with the travel distance. Generally, higher irrigation intensity tends to trigger faster MFV, especially at the field profile scale, where MFV and irrigation intensity have a significant positive correlation. At the trench and hillslope scale, the presence of large macropores (diameter > 10 mm) is a key factor determining MFV. The geometric mean of MFV for sites with large macropores was found to be about 8 times larger than that for sites without large macropores. For sites with large macropores, MFV increases with the macropore diameter. However, no noticeable difference in MFV has been observed among different soil textures and land uses. Comparing the existing equations used to describe MFV, the Poiseuille equation significantly overestimates the observed values, while Manning-type equations generate reasonable values. The insights from this study will shed light on future field campaigns and modeling of macropore flow.
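To make the closing comparison concrete, the sketch below evaluates the standard textbook Poiseuille and Manning velocities for a single water-filled cylindrical macropore under unit hydraulic gradient; all parameter values, including the roughness coefficient, are illustrative assumptions rather than values from the meta-analysis.
```python
rho, g, mu = 1000.0, 9.81, 1.0e-3      # water density, gravity, viscosity (SI)
d = 5e-3                               # macropore diameter, m (a "large" pore)
i = 1.0                                # hydraulic gradient (dimensionless)
n_manning = 0.05                       # assumed roughness coefficient

# Laminar (Hagen-Poiseuille) mean velocity in a circular tube.
v_poiseuille = rho * g * i * d**2 / (32 * mu)
# Manning equation for a full circular pipe, hydraulic radius R = d/4.
v_manning = (1 / n_manning) * (d / 4) ** (2 / 3) * i ** 0.5

print(f"Poiseuille estimate: {v_poiseuille:.2f} m/s")
print(f"Manning estimate:    {v_manning:.3f} m/s")
# The Poiseuille value is far larger, consistent with the overestimation
# reported in the meta-analysis.
```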
Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff
Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; ...
2015-06-01
The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.
Posttest analysis of a 1:6-scale reinforced concrete reactor containment building
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weatherby, J.R.
In an experiment conducted at Sandia National Laboratories, a 1:6-scale model of a reinforced concrete light water reactor containment building was pressurized with nitrogen gas to more than three times its design pressure. The pressurization produced one large tear and several smaller tears in the steel liner plate that functioned as the primary pneumatic seal for the structure. The data collected from the overpressurization test have been used to evaluate and further refine methods of structural analysis that can be used to predict the performance of containment buildings under conditions produced by a severe accident. This report describes posttest finite element analyses of the 1:6-scale model tests and compares pretest predictions of the structural response to the experimental results. Strains and displacements calculated in axisymmetric finite element analyses of the 1:6-scale model are compared to strains and displacements measured in the experiment. Detailed analyses of the liner plate are also described in the report. The region of the liner surrounding the large tear was analyzed using two different two-dimensional finite element models. The results from these analyses indicate that the primary mechanisms that initiated the tear can be captured in a two-dimensional finite element model. Furthermore, the analyses show that the studs used to anchor the liner to the concrete wall played an important role in initiating the liner tear. Three-dimensional finite element analyses of liner plates loaded by studs are also presented. Results from the three-dimensional analyses are compared to results from two-dimensional analyses of the same problems. 12 refs., 56 figs., 1 tab.
Spectral analysis of the Forel-Ule Ocean colour comparator scale
NASA Astrophysics Data System (ADS)
Wernand, M. R.; van der Woerd, H. J.
2010-04-01
François Alphonse Forel (1890) and Willi Ule (1892) composed a colour comparator scale, with tints varying from indigo-blue to coca-cola brown, to quantify the colour of natural waters, like seas, lakes and rivers. For each measurement, the observer compares the colour of the water above a submersed white disc (Secchi disc) with the hand-held scale of pre-defined colours. The scale can be well reproduced from a simple recipe for twenty-one coloured chemical solutions and, because of the ease of its use, the Forel-Ule (FU) scale has been applied globally and intensively by oceanographers and limnologists since the year 1890. Indeed, the archived FU data belong to the oldest oceanographic data sets and contain information on the changes in geobiophysical properties of natural waters during the last century. In this article we describe the optical properties of the FU scale and its ability to cover the colours of natural waters, as observed by the human eye. The recipe of the scale and its reproduction is described. The spectral transmission of the tubes, with the corresponding chromaticity coordinates, is presented. The FU scale, in all its simplicity, is found to be an adequate ocean colour comparator scale. The scale is well characterized, is stable, and observations are reproducible. This supports the idea that the large historic database of FU measurements is coherent and well calibrated. Moreover, the scale can be coupled to contemporary multi-spectral observations with hand-held and satellite-based spectrometers.
Santangelo, Valerio
2018-01-01
Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlights a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among brain networks implicated during divided attention across spatial locations and sensory modalities, pointing out the importance of investigating effective connectivity of large-scale brain networks supporting complex behavior. PMID:29535614
Honeycomb: Visual Analysis of Large Scale Social Networks
NASA Astrophysics Data System (ADS)
van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.
The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of these data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger-scale data (with millions of connections), which we illustrate by using a large-scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.
Comparative evaluation of saliva collection methods for proteome analysis.
Golatowski, Claas; Salazar, Manuela Gesell; Dhople, Vishnu Mukund; Hammer, Elke; Kocher, Thomas; Jehmlich, Nico; Völker, Uwe
2013-04-18
Saliva collection devices are widely used for large-scale screening approaches. This study was designed to compare the suitability of three different whole-saliva collection approaches for subsequent proteome analyses. From 9 young healthy volunteers (4 women and 5 men) saliva samples were collected either unstimulated by passive drooling or stimulated using a paraffin gum or Salivette® (cotton swab). Saliva volume, protein concentration and salivary protein patterns were analyzed comparatively. Samples collected using paraffin gum showed the highest saliva volume (4.1±1.5 ml) followed by Salivette® collection (1.8±0.4 ml) and drooling (1.0±0.4 ml). Saliva protein concentrations (average 1145 μg/ml) showed no significant differences between the three sampling schemes. Each collection approach facilitated the identification of about 160 proteins (≥2 distinct peptides) per subject, but collection-method dependent variations in protein composition were observed. Passive drooling, paraffin gum and Salivette® each allows similar coverage of the whole saliva proteome, but the specific proteins observed depended on the collection approach. Thus, only one type of collection device should be used for quantitative proteome analysis in one experiment, especially when performing large-scale cross-sectional or multi-centric studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan
2013-06-27
Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing
2013-01-01
Background Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Results Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Conclusions Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html. PMID:23802613
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distribution of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R ≥ 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace well the large-scale velocity field and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters of 689 ± 178 km/s on approximately 150/h Mpc scale (Lauer & Postman 1994) is larger than expected in any of the models studied (≤ 190 ± 78 km/s).
Differentiating unipolar and bipolar depression by alterations in large-scale brain networks.
Goya-Maldonado, Roberto; Brodmann, Katja; Keil, Maria; Trost, Sarah; Dechent, Peter; Gruber, Oliver
2016-02-01
Misdiagnosing bipolar depression can lead to mistreatment with very deleterious consequences. Although depressive symptoms may be similarly expressed in unipolar and bipolar disorder, changes in specific brain networks could be very distinct, being therefore informative markers for the differential diagnosis. We aimed to characterize specific alterations in candidate large-scale networks (frontoparietal, cingulo-opercular, and default mode) in symptomatic unipolar and bipolar patients using resting state fMRI, a cognitively undemanding paradigm well suited to investigating patients. Networks were selected after independent component analysis, compared across 40 acutely depressed patients (20 unipolar, 20 bipolar) and 20 controls well matched for age, gender, and education level, and alterations were correlated with clinical parameters. Despite comparable symptoms, patient groups were robustly differentiated by large-scale network alterations. Differences were driven in bipolar patients by increased functional connectivity in the frontoparietal network, a central executive and externally oriented network. Conversely, unipolar patients presented increased functional connectivity in the default mode network, an introspective and self-referential network, as well as reduced connectivity of the cingulo-opercular network, a network involved in detecting the need to switch between internally and externally oriented demands, to default mode regions. These findings were mostly unaffected by current medication, comorbidity, and structural changes. Moreover, network alterations in unipolar patients were significantly correlated with the number of depressive episodes. Unipolar and bipolar groups displaying similar symptomatology could be clearly distinguished by characteristic changes in large-scale networks, encouraging further investigation of network fingerprints for clinical use. Hum Brain Mapp 37:808-818, 2016. © 2015 Wiley Periodicals, Inc.
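A toy sketch of the kind of pipeline described above (independent component analysis followed by a group comparison), assuming scikit-learn and SciPy; it is not the authors' pipeline, and the data, component choice, and group labels are invented.
```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import ttest_ind

n_subjects, n_timepoints, n_regions = 40, 200, 30
rng = np.random.default_rng(2)
data = rng.standard_normal((n_subjects, n_timepoints, n_regions))

# Temporal concatenation across subjects, then ICA into components.
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
sources = ica.fit_transform(data.reshape(-1, n_regions))   # (subjects*T, components)
sources = sources.reshape(n_subjects, n_timepoints, -1)

# Per-subject "engagement" of component 0, compared between two groups.
engagement = sources[:, :, 0].std(axis=1)
unipolar, bipolar = engagement[:20], engagement[20:]
t, p = ttest_ind(unipolar, bipolar)
print(f"t = {t:.2f}, p = {p:.3f}")
```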
Application of Interface Technology in Nonlinear Analysis of a Stitched/RFI Composite Wing Stub Box
NASA Technical Reports Server (NTRS)
Wang, John T.; Ransom, Jonathan B.
1997-01-01
A recently developed interface technology was successfully employed in the geometrically nonlinear analysis of a full-scale stitched/RFI composite wing box loaded in bending. The technology allows mismatched finite element models to be joined in a variationally consistent manner and reduces the modeling complexity by eliminating transition meshing. In the analysis, local finite element models of nonlinearly deformed wide bays of the wing box are refined without the need for transition meshing to the surrounding coarse mesh. The COMET-AR finite element code, which has the interface technology capability, was used to perform the analyses. The COMET-AR analysis is compared to both a NASTRAN analysis and to experimental data. The interface technology solution is shown to be in good agreement with both. The viability of interface technology for coupled global/local analysis of large scale aircraft structures is demonstrated.
Remote sensing applied to numerical modelling. [water resources pollution
NASA Technical Reports Server (NTRS)
Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.
1975-01-01
Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-difference treatment using the rigid-lid model and a rigid-line grid system.
Structural Analysis of Biodiversity
Sirovich, Lawrence; Stoeckle, Mark Y.; Zhang, Yu
2010-01-01
Large, recently-available genomic databases cover a wide range of life forms, suggesting opportunity for insights into genetic structure of biodiversity. In this study we refine our recently-described technique using indicator vectors to analyze and visualize nucleotide sequences. The indicator vector approach generates correlation matrices, dubbed Klee diagrams, which represent a novel way of assembling and viewing large genomic datasets. To explore its potential utility, here we apply the improved algorithm to a collection of almost 17000 DNA barcode sequences covering 12 widely-separated animal taxa, demonstrating that indicator vectors for classification gave correct assignment in all 11000 test cases. Indicator vector analysis revealed discontinuities corresponding to species- and higher-level taxonomic divisions, suggesting an efficient approach to classification of organisms from poorly-studied groups. As compared to standard distance metrics, indicator vectors preserve diagnostic character probabilities, enable automated classification of test sequences, and generate high-information density single-page displays. These results support application of indicator vectors for comparative analysis of large nucleotide data sets and raise prospect of gaining insight into broad-scale patterns in the genetic structure of biodiversity. PMID:20195371
Resolving the substructure of molecular clouds in the LMC
NASA Astrophysics Data System (ADS)
Wong, Tony; Hughes, Annie; Tokuda, Kazuki; Indebetouw, Remy; Wojciechowski, Evan; Bandurski, Jeffrey; MC3 Collaboration
2018-01-01
We present recent wide-field CO and 13CO mapping of giant molecular clouds in the Large Magellanic Cloud with ALMA. Our sample exhibits diverse star-formation properties, and reveals comparably diverse molecular cloud properties including surface density and velocity dispersion at a given scale. We first present the results of a recent study comparing two GMCs at the extreme ends of the star formation activity spectrum. Our quiescent cloud exhibits 10 times lower surface density and 5 times lower velocity dispersion than the active 30 Doradus cloud, yet in both clouds we find a wide range of line widths at the smallest resolved scales, spanning nearly the full range of line widths seen at all scales. This suggests an important role for feedback on sub-parsec scales, while the energetics on larger scales are dominated by clump-to-clump relative velocities. We then extend our analysis to four additional clouds that exhibit intermediate levels of star formation activity.
Annear, Michael J; Eccleston, Claire E; McInerney, Frances J; Elliott, Kate-Ellen J; Toye, Christine M; Tranter, Bruce K; Robinson, Andrew L
2016-06-01
To compare the psychometric performance of the Dementia Knowledge Assessment Scale (DKAS) and the Alzheimer's Disease Knowledge Scale (ADKS) when administered to a large international cohort before and after online dementia education. Comparative psychometric analysis with pre- and posteducation scale responses. The setting for this research encompassed 7,909 individuals from 124 countries who completed the 9-week Understanding Dementia Massive Open Online Course (MOOC). Volunteer respondents who completed the DKAS and ADKS before (n = 3,649) and after (n = 878) completion of the Understanding Dementia MOOC. Assessment and comparison of the DKAS and ADKS included evaluation of scale development procedures, interscale correlations, response distribution, internal consistency, and construct validity. The DKAS had superior internal consistency, wider response distribution with less ceiling effect, and better discrimination between pre- and posteducation scores and occupational cohorts than the ADKS. The 27-item DKAS is a reliable and preliminarily valid measure of dementia knowledge that is psychometrically and conceptually sound, overcomes limitations of existing instruments, and can be administered to diverse cohorts to measure baseline understanding and knowledge change. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
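Internal consistency in comparisons of this kind is commonly summarized with Cronbach's alpha; below is a small self-contained implementation applied to made-up item responses, not the DKAS or ADKS data.
```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents, items) array of scored responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated dichotomous responses to 27 items driven by a single ability factor.
rng = np.random.default_rng(3)
ability = rng.normal(size=(500, 1))
responses = (ability + rng.normal(scale=1.0, size=(500, 27)) > 0).astype(int)
print("alpha:", round(cronbach_alpha(responses), 2))
```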
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This comes at only a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
Volatility return intervals analysis of the Japanese market
NASA Astrophysics Data System (ADS)
Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.
2008-03-01
We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
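The return-interval statistic described above is straightforward to compute; the following is a minimal sketch (synthetic data and an assumed threshold, not the study's data) of how the intervals and the scaling variable τ/<τ> could be obtained:

```python
import numpy as np

def return_intervals(volatility, q):
    exceed = np.flatnonzero(volatility > q)   # times when volatility exceeds threshold q
    return np.diff(exceed)                    # waiting times between successive exceedances

rng = np.random.default_rng(0)
vol = np.abs(rng.standard_normal(10_000))     # stand-in for daily volatility data
tau = return_intervals(vol, q=1.5)
scaled = tau / tau.mean()                     # argument of the scaling function f(tau/<tau>)
```

Collapsing histograms of `scaled` for different thresholds q onto one curve is the kind of scaling behavior the abstract reports.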
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
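The sketch below illustrates the general idea of restricting kernel competitive learning to a sampled subspace, using a Nyström-style feature map followed by an online winner-take-all update. It is an assumption-laden approximation of the approach described above, not the authors' AKCL/PAKCL code; all names, parameters, and the toy data are hypothetical.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=0.5):
    # approximate kernel feature map built from a small sample of landmark points
    K_ll = rbf(landmarks, landmarks, gamma)
    evals, evecs = np.linalg.eigh(K_ll)
    W = evecs / np.sqrt(np.maximum(evals, 1e-12))     # ~ K_ll^{-1/2}
    return rbf(X, landmarks, gamma) @ W

def competitive_learning(Z, k, lr=0.05, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    prototypes = Z[rng.choice(len(Z), k, replace=False)].copy()
    for _ in range(epochs):
        for z in Z[rng.permutation(len(Z))]:
            j = np.argmin(((prototypes - z) ** 2).sum(axis=1))   # winner takes all
            prototypes[j] += lr * (z - prototypes[j])             # move the winner toward the sample
    return prototypes

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
landmarks = X[rng.choice(500, 30, replace=False)]                 # sampled subspace
centers = competitive_learning(nystrom_features(X, landmarks), k=4)
```

The key design point mirrored here is that prototypes live in a low-dimensional sampled representation of the kernel space, so the full kernel matrix is never formed.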
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
NASA Technical Reports Server (NTRS)
Dolan, R. (Principal Investigator); Heywood, J. E.
1975-01-01
The author has identified the following significant results. Areas of the coast were selected which have historically shown vulnerability to storm damage. On Assateague Island, there are six such areas: the western tip of Fishing Point; the southern National Park Service parking lot; Smith Hammocks; Fox Hill Level; Little Fox Hill Level; and the northern five kilometers of the island. Comparative analysis of these areas with other data and with large and small scale aerial photographs shows them to be associated with large overwash fans and shoreline cusping. Initial analysis of Landsat imagery and high altitude photography indicates that the areas of high vulnerability are also associated with the angular orientation of the shoreline.
Intermediate-scale plasma irregularities in the polar ionosphere inferred from GPS radio occultation
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O.; Butala, M. D.; Mannucci, A. J.
2015-02-01
We report intermediate-scale plasma irregularities in the polar ionosphere inferred from high-resolution radio occultation (RO) measurements using GPS (Global Positioning System) to CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) satellite radio links. The high inclination of CASSIOPE and the high rate of signal reception by the GPS Attitude, Positioning, and Profiling RO receiver on CASSIOPE enable a high-resolution investigation of the dynamics of the polar ionosphere with unprecedented detail. Intermediate-scale, scintillation-producing irregularities, which correspond to 1 to 40 km scales, were inferred by applying multiscale spectral analysis on the RO phase measurements. Using our multiscale spectral analysis approach and satellite data (Polar Operational Environmental Satellites and Defense Meteorological Satellite Program), we discovered that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap. We found that large length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap implying that the irregularity scales and phase scintillation characteristics are a function of the solar wind and magnetospheric forcings.
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
NASA Astrophysics Data System (ADS)
Sun, P.; Jokipii, J. R.; Giacalone, J.
2016-12-01
Anisotropies in astrophysical turbulence have been proposed and observed for a long time, and recent observations adopting multi-scale analysis techniques have provided a detailed description of the scale-dependent power spectrum of the magnetic field parallel and perpendicular to the scale-dependent magnetic field line at different scales in the solar wind. In previous work, we proposed a multi-scale method to synthesize non-isotropic turbulent magnetic fields with pre-determined power spectra of the fluctuating magnetic field as a function of scale. We present the effect of test-particle transport in the resulting field with a two-scale algorithm. We find that scale-dependent turbulence anisotropy affects charged-particle transport in a significantly different way than isotropy or global anisotropy does. It is important to apply this field synthesis method to the solar wind magnetic field based on spacecraft data. However, this relies on how we extract the power spectra of the turbulent magnetic field across different scales. In this study, we propose a power spectrum synthesis method based on Fourier analysis to extract the large- and small-scale power spectra from a single spacecraft observation with a long enough period and a high sampling frequency. We apply the method to the solar wind measurement by the magnetometer onboard the ACE spacecraft and regenerate the large-scale isotropic 2D spectrum and the small-scale anisotropic 2D spectrum. We run test-particle simulations in the magnetic field generated in this way to estimate the transport coefficients and to compare with the isotropic turbulence model.
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges/opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin
2013-01-01
Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin
2014-01-30
The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
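Since both records above define the biomarker through S(f) = 1/f^β, a naive way to estimate β is a log-log fit to the periodogram, sketched below; note that the studies themselves favour the averaged wavelet coefficient method over this simple estimator and over detrended fluctuation analysis, neither of which is reproduced here.

```python
import numpy as np

def spectral_index(x, fs=1.0):
    """Estimate beta in S(f) ~ 1/f**beta via a least-squares fit in log-log space."""
    x = np.asarray(x, dtype=float)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]          # drop the zero frequency
    S = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2       # periodogram
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return -slope
```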
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo
2016-01-01
The effects of large-scale forcing and land-atmosphere interactions on precipitation are investigated with NASA-Unified WRF (NU-WRF) simulations during fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model is found to capture major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with the observations in dry and wet regions in the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as total precipitation, is presented under the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on the dry/wet regions and are a function of temporal and spatial scales. While the ratio of local to large-scale moisture contributions varies with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture in the dry region and during light rain events. The feedback of land surface processes to large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than wet conditions. Drier soil moisture tends to further suppress existing below-normal precipitation conditions via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort were used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed performance equal to that of other scales predicting ELVO, despite being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
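Because the PASS scale is simply a count of abnormal findings on three NIHSS items, it reduces to a few lines of code; the helper below is a hypothetical illustration of the stated cut point, not a clinical tool.

```python
def pass_score(loc_questions_abnormal, gaze_palsy, arm_weakness):
    """Each argument is 1 if the corresponding NIHSS item is abnormal, else 0."""
    return loc_questions_abnormal + gaze_palsy + arm_weakness

# cut point from the abstract: PASS >= 2 suggests emergent large vessel occlusion
suspect_elvo = pass_score(1, 1, 0) >= 2
```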
Mouse Activity across Time Scales: Fractal Scenarios
Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.
2014-01-01
In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity - a typical 1/f complex pattern - while for large time intervals there is anti-correlation. High correlation over short time intervals (in both the waking state and SWS) is related to highly coordinated muscle activity. In the waking state we associate high correlation both to muscle activity and to mouse stereotyped movements (grooming, walking, etc.). On the other hand, the observed anti-correlation over large time scales during SWS appears related to a feedback autonomic response. The transition from a correlated regime at short scales to an anti-correlated regime at large scales during SWS is given by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each with a characteristic time correlation. Finally, we compare scaling properties of body acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to those reported in previous works regarding the human heartbeat. Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
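The Detrended Fluctuation Analysis used above follows a standard recipe; a minimal sketch (window sizes and the random test signal are placeholders, not the study's data) is:

```python
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                     # integrated profile of the signal
    F = []
    for s in scales:
        n = len(y) // s
        windows = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # linear detrending within each window, then RMS of the residuals
        resid = [w - np.polyval(np.polyfit(t, w, 1), t) for w in windows]
        F.append(np.sqrt(np.mean(np.square(resid))))
    return np.array(F)

F = dfa(np.random.default_rng(0).standard_normal(4096), scales=[16, 32, 64, 128, 256])
# the DFA exponent is the slope of log F(s) vs log s; values above 0.5 indicate
# persistent (1/f-like) correlation, values below 0.5 indicate anti-correlation
```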
Tools for Large-Scale Mobile Malware Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierma, Michael
Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
ERIC Educational Resources Information Center
Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat
2016-01-01
Statistical procedure used in adjusting test score difficulties on test forms is known as "equating". Equating makes it possible for various test forms to be used interchangeably. In terms of where the equating method fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefits of pre-equating, when…
Using a Mixture IRT Model to Understand English Learner Performance on Large-Scale Assessments
ERIC Educational Resources Information Center
Shea, Christine A.
2013-01-01
The purpose of this study was to determine whether an eighth grade state-level math assessment contained items that function differentially (DIF) for English Learner students (EL) as compared to English Only students (EO) and if so, what factors might have caused DIF. To determine this, Differential Item Functioning (DIF) analysis was employed.…
The Academic Profession in Russia's Two Capitals: The Impact of 20 Years of Transition
ERIC Educational Resources Information Center
Sivak, Elizaveta; Yudkevich, Maria
2017-01-01
This paper studies the dynamics of key characteristics of the academic profession in Russia based on the analysis of university faculty in the two largest cities in Russia--Moscow and St. Petersburg. We use data on Russian university faculty from two large-scale comparative studies of the academic profession ("The Carnegie Study" carried…
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based on ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.
Towards the computation of time-periodic inertial range dynamics
NASA Astrophysics Data System (ADS)
van Veen, L.; Vela-Martín, A.; Kawahara, G.
2018-04-01
We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Linking Large-Scale Reading Assessments: Measuring International Trends over 40 Years
ERIC Educational Resources Information Center
Strietholt, Rolf; Rosén, Monica
2016-01-01
Since the start of the new millennium, international comparative large-scale studies have become one of the most well-known areas in the field of education. However, the International Association for the Evaluation of Educational Achievement (IEA) has already been conducting international comparative studies for about half a century. The present…
DEMNUni: massive neutrinos and the bispectrum of large scale structures
NASA Astrophysics Data System (ADS)
Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano
2018-03-01
The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance, as it allows one to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, finding that the approximation of neutrinos contributing at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
The impact of stellar feedback on the density and velocity structure of the interstellar medium
NASA Astrophysics Data System (ADS)
Grisdale, Kearn; Agertz, Oscar; Romeo, Alessandro B.; Renaud, Florent; Read, Justin I.
2017-04-01
We study the impact of stellar feedback in shaping the density and velocity structure of neutral hydrogen (H I) in disc galaxies. For our analysis, we carry out ˜4.6 pc resolution N-body+adaptive mesh refinement hydrodynamic simulations of isolated galaxies, set up to mimic a Milky Way and a Large and Small Magellanic Cloud. We quantify the density and velocity structure of the interstellar medium using power spectra and compare the simulated galaxies to observed H I in local spiral galaxies from THINGS (The H I Nearby Galaxy Survey). Our models with stellar feedback give an excellent match to the observed THINGS H I density power spectra. We find that kinetic energy power spectra in feedback-regulated galaxies, regardless of galaxy mass and size, show scalings in excellent agreement with supersonic turbulence (E(k) ∝ k-2) on scales below the thickness of the H I layer. We show that feedback influences the gas density field, and drives gas turbulence, up to large (kpc) scales. This is in stark contrast to density fields generated by large-scale gravity-only driven turbulence. We conclude that the neutral gas content of galaxies carries signatures of stellar feedback on all scales.
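Density and kinetic energy power spectra of the kind compared above are typically obtained by Fourier transforming the field and shell-averaging; the following 2D sketch (synthetic field, assumed units) illustrates such an estimator, not the authors' pipeline:

```python
import numpy as np

def radial_spectrum(field, dx=1.0):
    """Isotropic (shell-averaged) power spectrum of a square 2D field."""
    n = field.shape[0]
    power = np.abs(np.fft.fftn(field)) ** 2
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kr = np.sqrt(kx ** 2 + ky ** 2).ravel()
    nbins = n // 2
    edges = np.linspace(0.0, kr.max(), nbins + 1)
    idx = np.clip(np.digitize(kr, edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, sums / np.maximum(counts, 1)

# a slope near -2 in log-log space corresponds to the E(k) ~ k^-2 scaling quoted above
```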
NASA Astrophysics Data System (ADS)
Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Li, P. P.; Knosp, B.; Vu, Q. A.; Su, H.; Dang, V.; Fovell, R.; Tanelli, S.; Garay, M.; Willis, J.; Poulsen, W.; Fishbein, E.; Ao, C. O.; Vazquez, J.; Park, K. J.; Callahan, P.; Marcus, S.; Haddad, Z.; Fetzer, E.; Kahn, R.
2007-12-01
In spite of recent improvements in hurricane track forecast accuracy, currently there are still many unanswered questions about the physical processes that determine hurricane genesis, intensity, track and impact on the large-scale environment. Furthermore, a significant amount of work remains to be done in validating hurricane forecast models, understanding their sensitivities and improving their parameterizations. None of this can be accomplished without a comprehensive set of multiparameter observations that are relevant to both the large-scale and the storm-scale processes in the atmosphere and in the ocean. To address this need, we have developed a prototype of a comprehensive hurricane information system of high-resolution satellite, airborne and in-situ observations and model outputs pertaining to: i) the thermodynamic and microphysical structure of the storms; ii) the air-sea interaction processes; iii) the larger-scale environment as depicted by the SST, ocean heat content and the aerosol loading of the environment. Our goal was to create a one-stop place to provide the researchers with an extensive set of observed hurricane data, and their graphical representation, together with large-scale and convection-resolving model output, all organized in an easy way to determine when coincident observations from multiple instruments are available. Analysis tools will be developed in the next step. The analysis tools will be used to determine spatial, temporal and multiparameter covariances that are needed to evaluate model performance, provide information for data assimilation and characterize and compare observations from different platforms. We envision that the developed hurricane information system will help in the validation of the hurricane models, in the systematic understanding of their sensitivities and in the improvement of the physical parameterizations employed by the models. Furthermore, it will help in studying the physical processes that affect hurricane development and impact on the large-scale environment. This talk will describe the developed prototype of the hurricane information system. Furthermore, we will use a set of WRF hurricane simulations and compare simulated to observed structures to illustrate how the information system can be used to discriminate between simulations that employ different physical parameterizations. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
ERIC Educational Resources Information Center
Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen
2018-01-01
The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…
ERIC Educational Resources Information Center
Hampden-Thompson, Gillian; Lubben, Fred; Bennett, Judith
2011-01-01
Quantitative secondary analysis of large-scale data can be combined with in-depth qualitative methods. In this paper, we discuss the role of this combined methods approach in examining the uptake of physics and chemistry in post compulsory schooling for students in England. The secondary data analysis of the National Pupil Database (NPD) served…
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.
DOT National Transportation Integrated Search
2012-06-01
This research program conducted a large experimental program, which consisted of the design, construction, curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice region, and then compared ...
Planck intermediate results. XLII. Large-scale Galactic magnetic fields
NASA Astrophysics Data System (ADS)
Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-12-01
Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.
Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations
NASA Technical Reports Server (NTRS)
Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
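A percolation-based void definition of this kind can be prototyped with connected-component labelling; the sketch below (2D, synthetic density field, arbitrary threshold) is an assumed illustration of the general procedure rather than the authors' code:

```python
import numpy as np
from scipy import ndimage

def largest_void_fraction(density, threshold):
    """Fraction of underdense cells that belong to the single largest connected void."""
    voids = density < threshold
    labels, n = ndimage.label(voids)
    if n == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]      # drop background label 0
    return sizes.max() / voids.sum()

rng = np.random.default_rng(0)
rho = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 4)   # smoothed toy density field
# sweeping the threshold and watching this fraction jump locates the percolation transition
fractions = [largest_void_fraction(rho, t) for t in np.linspace(rho.min(), rho.max(), 20)]
```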
Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach
Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.
2016-01-01
Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of the large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation run in bioreactors with one-sided pH control, together with its satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks, indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of the large scale. © 2015 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Yu, Garmay; A, Shvetsov; D, Karelov; D, Lebedev; A, Radulescu; M, Petukhov; V, Isaev-Ivanov
2012-02-01
Based on X-ray crystallographic data available in the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, which is known to be a long helical filament, was approximated by a trimer simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of the C-terminal domain is correlated with conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with the large-scale filament conformation. PCA of the trajectories in the main-chain dihedral angle coordinate space indicates the co-existence of several different large-scale conformations of the modeled trimer. In order to clarify the relationship of independent domain orientation with large-scale filament conformation, we have performed analysis of independent domain motion and its implications for the filament geometry.
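Principal component analysis of an aligned trajectory, as used above, reduces to an eigendecomposition of the coordinate covariance; a minimal sketch (array shapes assumed) is:

```python
import numpy as np

def pca_modes(coords):
    """coords: (n_frames, 3 * n_atoms) array of aligned trajectory coordinates."""
    X = coords - coords.mean(axis=0)            # remove the average structure
    cov = X.T @ X / (len(X) - 1)                # coordinate covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]             # largest-variance collective motions first
    return evals[order], evecs[:, order]
```

Projecting individual frames onto the leading eigenvectors then exposes the large-scale collective motions, such as the C-terminal domain rotation discussed above.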
Method for revealing biases in precision mass measurements
NASA Astrophysics Data System (ADS)
Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.
2013-02-01
A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.
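A toy version of the two-stage comparison can make the bias-detection logic concrete; all numbers and groupings below are hypothetical, chosen only to show how a measured combined-load difference is compared against the difference predicted from the high-resolution reference measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
ref_dev = rng.normal(0.0, 0.2, 16)            # reference deviations of the 16 partial loads, in mg

def predicted_difference(group_a, group_b):
    # difference between two combined loads, predicted from the reference deviations
    return ref_dev[list(group_a)].sum() - ref_dev[list(group_b)].sum()

measured = -0.15                               # mg, reading of the large-scale comparator
bias = measured - predicted_difference((0, 1, 2, 3), (4, 5, 6, 7))
```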
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2)δ_JLS(p) and the scaled mixing parameters (L + 1/2)ε_JLS(p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become a universal function of b, independent of L. This allows us to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long-range components of the two-body potential. Implications for peripheral waves obtained from chiral perturbation theory interactions to fifth order (N5LO) or from the large body of NN data considered in the SAID partial wave analysis are also drawn by comparing them with other phenomenological high-quality interactions, constructed to fit scattering data as well. We find that both N5LO and SAID peripheral waves disagree by more than 5σ with the Granada-2013 statistical analysis, by more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database, and by about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
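The impact-parameter transformation described above is a simple change of variables; the helper below (with toy phase shifts and assumed momentum units) shows how each partial wave is mapped onto the common b axis:

```python
import numpy as np

def to_impact_parameter(delta, L, p):
    """Return b = (L + 1/2)/p and the scaled phase (L + 1/2)*delta for one partial wave."""
    b = (L + 0.5) / p
    return b, (L + 0.5) * delta

p = np.linspace(0.5, 2.0, 50)                              # toy momenta
curves = [to_impact_parameter(np.sin(p) / p**L, L, p) for L in range(3, 7)]
# in the eikonal limit the curves for different L should collapse onto a single function of b
```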
Macroecological patterns of phytoplankton in the northwestern North Atlantic Ocean.
Li, W K W
2002-09-12
Many issues in biological oceanography are regional or global in scope; however, there are not many data sets of extensive areal coverage for marine plankton. In microbial ecology, a fruitful approach to large-scale questions is comparative analysis wherein statistical data patterns are sought from different ecosystems, frequently assembled from unrelated studies. A more recent approach termed macroecology characterizes phenomena emerging from large numbers of biological units by emphasizing the shapes and boundaries of statistical distributions, because these reflect the constraints on variation. Here, I use a set of flow cytometric measurements to provide macroecological perspectives on North Atlantic phytoplankton communities. Distinct trends of abundance in picophytoplankton and both small and large nanophytoplankton underlaid two patterns. First, total abundance of the three groups was related to assemblage mean-cell size according to the 3/4 power law of allometric scaling in biology. Second, cytometric diversity (an ataxonomic measure of assemblage entropy) was maximal at intermediate levels of water column stratification. Here, intermediate disturbance shapes diversity through an equitable distribution of cells in size classes, from which arises a high overall biomass. By subsuming local fluctuations, macroecology reveals meaningful patterns of phytoplankton at large scales.
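The two statistical patterns highlighted above can be checked with a few lines of code; the sketch below uses hypothetical size-class counts to compute a cytometric diversity (read here as a Shannon entropy over size classes) and the slope of the abundance-size relation, which the 3/4 power law predicts to be near -3/4:

```python
import numpy as np

def cytometric_diversity(counts):
    """Shannon entropy over cytometric size classes (an ataxonomic diversity measure)."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

sizes = np.array([0.8, 2.0, 5.0, 12.0])         # hypothetical mean cell sizes (micrometres)
abundance = np.array([2e5, 9e4, 3e4, 8e3])      # hypothetical cells per mL in each class
slope, _ = np.polyfit(np.log(sizes), np.log(abundance), 1)
entropy = cytometric_diversity(abundance)
```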
A comparison of obsessive-compulsive personality disorder scales.
Samuel, Douglas B; Widiger, Thomas A
2010-05-01
In this study, we utilized a large undergraduate sample (N = 536), oversampled for the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision [DSM-IV-TR]; American Psychiatric Association, 2000) obsessive-compulsive personality disorder (OCPD) pathology, to compare 8 self-report measures of OCPD. No prior study has compared more than 3 measures, and the results indicate that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to 2 external reference points: their relationships with a well-established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison, the results suggest important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, whereas some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder.
Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M
2018-06-05
Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
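As a point of reference, the generalized eigenproblem at the heart of the vibration analysis, K φ = λ M φ, can be sketched in a few lines of serial Python with scipy. The toy spring-mass chain below stands in for an assembled aerospace model; the parallel SHPC assembly and solvers that are the paper's actual contribution are not reproduced here.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Toy 1-D spring-mass chain standing in for an assembled structural model.
    n = 1000
    stiff, mass = 1.0e4, 1.0
    K = diags([2 * stiff * np.ones(n), -stiff * np.ones(n - 1), -stiff * np.ones(n - 1)],
              [0, -1, 1], format="csc")                 # stiffness matrix
    M = diags(mass * np.ones(n), 0, format="csc")       # lumped mass matrix

    # Lowest six vibration modes of K*phi = lambda*M*phi; frequencies are sqrt(lambda).
    vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")   # shift-invert about zero
    print("lowest natural frequencies (rad/s):", np.sqrt(vals))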
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
The belt conveyor is one of the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically sound, safe and reliable in operation, and economically feasible. Studying dynamic properties is therefore essential for improving efficiency and productivity and for guaranteeing safe, reliable and stable running. This review discusses dynamic research on, and applications of, large-scale belt conveyors, and analyzes the main research topics and the state of the art in conveyor dynamics. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, as well as nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.
A Comparative Analysis of Coprologic Diagnostic Methods for Detection of Toxoplasma gondii in Cats
Salant, Harold; Spira, Dan T.; Hamburger, Joseph
2010-01-01
The relative role of transmission of Toxoplasma gondii infection from cats to humans appears to have recently increased in certain areas. Large-scale screening of oocyst shedding in cats cannot rely on microscopy, because oocyst identification lacks sensitivity and specificity, or on bioassays, which require test animals and weeks before examination. We compared a sensitive and species-specific coprologic polymerase chain reaction (copro-PCR) for detection of T. gondii-infected cats with microscopy and a bioassay. In experimentally infected cats followed over time, microscopy was only occasionally positive, whereas positive copro-PCR and bioassay results were obtained continuously from days 2 to 24 post-infection. The copro-PCR is at least as sensitive and specific as the bioassay and is capable of detecting infective oocysts during cat infection. Therefore, this procedure can be used as the new gold standard for determining potential cat infectivity. Its technologic advantages over the bioassay make it superior for large-scale screening of cats. PMID:20439968
Proteinortho: Detection of (Co-)orthologs in large-scale analysis
2011-01-01
Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
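The reciprocal best alignment (reciprocal best hit) idea that Proteinortho extends can be sketched in a few lines; the score tables below are hypothetical stand-ins for BLAST bit scores between the proteins of two genomes, not Proteinortho's actual data structures.

    # Hypothetical BLAST-like bit scores between proteins of genomes A and B.
    hits_ab = {("a1", "b1"): 250.0, ("a1", "b2"): 90.0, ("a2", "b2"): 310.0}
    hits_ba = {("b1", "a1"): 245.0, ("b2", "a2"): 300.0, ("b2", "a1"): 80.0}

    def best_hits(hits):
        """Map each query protein to its single highest-scoring subject."""
        best = {}
        for (query, subject), score in hits.items():
            if query not in best or score > best[query][1]:
                best[query] = (subject, score)
        return {q: s for q, (s, _) in best.items()}

    best_ab, best_ba = best_hits(hits_ab), best_hits(hits_ba)
    orthologs = [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
    print("reciprocal best hits:", orthologs)    # [('a1', 'b1'), ('a2', 'b2')]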
Bathymetric comparisons adjacent to the Louisiana barrier islands: Processes of large-scale change
List, J.H.; Jaffe, B.E.; Sallenger, A.H.; Hansen, M.E.
1997-01-01
This paper summarizes the results of a comparative bathymetric study encompassing 150 km of the Louisiana barrier-island coast. Bathymetric data surrounding the islands and extending to 12 m water depth were processed from three survey periods: the 1880s, the 1930s, and the 1980s. Digital comparisons between surveys show large-scale, coherent patterns of sea-floor erosion and accretion related to the rapid erosion and disintegration of the islands. Analysis of the sea-floor data reveals two primary processes driving this change: massive longshore transport, in the littoral zone and at shoreface depths; and increased sediment storage in ebb-tidal deltas. Relative sea-level rise, although extraordinarily high in the study area, is shown to be an indirect factor in causing the area's rapid shoreline retreat rates.
Large-scale variations in observed Antarctic Sea ice extent and associated atmospheric circulation
NASA Technical Reports Server (NTRS)
Cavalieri, D. J.; Parkinson, C. L.
1981-01-01
Antarctic large-scale sea ice extent in 1974 is studied using data from Nimbus 2 and 5, together with temperature and sea level pressure fields from the Australian Meteorological Data Set. Electrically Scanning Microwave Radiometer data were three-day averaged and compared with 1000 mbar atmospheric pressure and sea level pressure data, also in three-day averages. Each three-day period was subjected to a Fourier analysis that included the mean latitude of the ice extent and the phases and percent variances of the first six Fourier harmonics. Centers of low pressure were generally found east of regions displaying rapid ice growth, and winds acted to extend the ice equatorward. An atmospheric response to the changing ice cover was also noted.
BiGG: a Biochemical Genetic and Genomic knowledgebase of large scale metabolic reconstructions
2010-01-01
Background Genome-scale metabolic reconstructions under the Constraint Based Reconstruction and Analysis (COBRA) framework are valuable tools for analyzing the metabolic capabilities of organisms and interpreting experimental data. As the number of such reconstructions and analysis methods increases, there is a greater need for data uniformity and ease of distribution and use. Description We describe BiGG, a knowledgebase of Biochemically, Genetically and Genomically structured genome-scale metabolic network reconstructions. BiGG integrates several published genome-scale metabolic networks into one resource with standard nomenclature which allows components to be compared across different organisms. BiGG can be used to browse model content, visualize metabolic pathway maps, and export SBML files of the models for further analysis by external software packages. Users may follow links from BiGG to several external databases to obtain additional information on genes, proteins, reactions, metabolites and citations of interest. Conclusions BiGG addresses a need in the systems biology community to have access to high quality curated metabolic models and reconstructions. It is freely available for academic use at http://bigg.ucsd.edu. PMID:20426874
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., the Fisher exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after univariate testing. Model-based methods are also impractical for large-scale datasets, and genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of disease. The more recent random forest method provides a more robust way of screening SNPs at the scale of thousands. For still larger data, e.g., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required for whole-genome large-scale association analysis in the presence of genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for subsequent, more complex computational modeling.
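A hedged sketch of the general idea, boosting on genotype data followed by importance-based screening, is given below using scikit-learn on synthetic 0/1/2 genotypes; the authors' actual boosting formulation and data set differ in detail.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic 0/1/2 genotypes; two interacting SNPs drive case/control status so that
    # marginal (univariate) effects are weak, mimicking the heterogeneity problem above.
    rng = np.random.default_rng(0)
    n_samples, n_snps = 500, 2000
    X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
    risk = 0.6 * X[:, 10] * X[:, 25] + rng.normal(size=n_samples)
    y = (risk > np.median(risk)).astype(int)

    clf = GradientBoostingClassifier(n_estimators=200, max_depth=2, learning_rate=0.1)
    clf.fit(X, y)

    # Rank SNPs by boosting importance and keep a short candidate list for downstream,
    # more complex modeling.
    top = np.argsort(clf.feature_importances_)[::-1][:20]
    print("top candidate SNP indices:", top)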
NASA Astrophysics Data System (ADS)
Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.
2018-03-01
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter, when they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.
Xu, Weijia; Ozer, Stuart; Gutell, Robin R
2009-01-01
With an increasingly large number of properly aligned sequences, comparative sequence analysis can accurately identify not only common structures formed by standard base pairing but also new types of structural elements and constraints. However, traditional methods are too computationally expensive to perform well on large-scale alignments and are less effective with sequences from diverse phylogenetic classifications. We propose a new approach that utilizes coevolutionary rates among pairs of nucleotide positions using the phylogenetic and evolutionary relationships of the organisms of the aligned sequences. With a novel data schema to manage the relevant information within a relational database, our method, implemented with Microsoft SQL Server 2005, showed 90% sensitivity in identifying base pair interactions among 16S ribosomal RNA sequences from Bacteria, at a scale 40 times larger, and with 50% better sensitivity, than a previous study. The results also indicated covariation signals for a few sets of cross-strand base stacking pairs in secondary structure helices, and other subtle constraints in the RNA structure.
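As a rough illustration of column-pair covariation scoring (mutual information on a toy alignment; the paper's database-backed, phylogeny-aware method is considerably richer), consider:

    import numpy as np
    from collections import Counter

    alignment = ["GCAUGC",
                 "GCGUGC",
                 "GUAUAC",
                 "GUGUAC"]          # toy aligned RNA sequences, not real 16S data

    def mutual_information(col_i, col_j):
        """Mutual information (bits) between two alignment columns."""
        n = len(col_i)
        pi, pj = Counter(col_i), Counter(col_j)
        pij = Counter(zip(col_i, col_j))
        mi = 0.0
        for (a, b), c in pij.items():
            p_ab = c / n
            mi += p_ab * np.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
        return mi

    cols = list(zip(*alignment))
    L = len(cols)
    scores = {(i, j): mutual_information(cols[i], cols[j])
              for i in range(L) for j in range(i + 1, L)}
    best = max(scores, key=scores.get)
    print("most covarying column pair:", best, "MI =", round(scores[best], 3))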
Xu, Weijia; Ozer, Stuart; Gutell, Robin R.
2010-01-01
With an increasingly large number of properly aligned sequences, comparative sequence analysis can accurately identify not only common structures formed by standard base pairing but also new types of structural elements and constraints. However, traditional methods are too computationally expensive to perform well on large-scale alignments and are less effective with sequences from diverse phylogenetic classifications. We propose a new approach that utilizes coevolutionary rates among pairs of nucleotide positions using the phylogenetic and evolutionary relationships of the organisms of the aligned sequences. With a novel data schema to manage the relevant information within a relational database, our method, implemented with Microsoft SQL Server 2005, showed 90% sensitivity in identifying base pair interactions among 16S ribosomal RNA sequences from Bacteria, at a scale 40 times larger, and with 50% better sensitivity, than a previous study. The results also indicated covariation signals for a few sets of cross-strand base stacking pairs in secondary structure helices, and other subtle constraints in the RNA structure. PMID:20502534
NASA Technical Reports Server (NTRS)
Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.
1997-01-01
Cloud microphysical parameterizations have attracted a great deal of attention in recent years due to their effect on cloud radiative properties and cloud-related hydrological processes in large-scale models. The parameterization of cirrus particle size has been demonstrated as an indispensable component in the climate feedback analysis. Therefore, global-scale, long-term observations of cirrus particle sizes are required both as a basis of and as a validation of parameterizations for climate models. While there is a global scale, long-term survey of water cloud droplet sizes (Han et al.), there is no comparable study for cirrus ice crystals. This study is an effort to supply such a data set.
ERIC Educational Resources Information Center
Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa
2016-01-01
In this large-scale study, higher education teachers' descriptions of their own learning were examined with qualitative analysis involving application of principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…
Solving large scale structure in ten easy steps with COLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
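The large-scale, LPT part of the scheme can be illustrated with a first-order (Zel'dovich) displacement field computed from a toy density grid via FFTs; COLA itself uses second-order LPT and adds an N-body solver for the residual small-scale motion, neither of which this sketch attempts.

    import numpy as np

    # First-order LPT (Zel'dovich) displacements from a toy Gaussian overdensity grid.
    n, box = 64, 100.0                         # grid points per side, box size (Mpc/h)
    rng = np.random.default_rng(1)
    delta = rng.normal(scale=0.05, size=(n, n, n))

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid division by zero at the mean mode

    delta_k = np.fft.fftn(delta)
    delta_k[0, 0, 0] = 0.0                     # remove the mean
    # psi_k = i * k / k^2 * delta_k, so that div(psi) = -delta (Zel'dovich approximation)
    psi = [np.real(np.fft.ifftn(1j * ki / k2 * delta_k)) for ki in (kx, ky, kz)]

    # Particles on the initial grid are displaced by D(a) * psi at scale factor a.
    rms = [float(np.sqrt((p ** 2).mean())) for p in psi]
    print("rms displacement per axis (Mpc/h):", np.round(rms, 3))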
ORNL Pre-test Analyses of A Large-scale Experiment in STYLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T; Yin, Shengjun; Klasky, Hilda B
Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management of non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and applied to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
A comparative analysis of forest cover and catchment water yield relationships in northern China
Shuai Wang; Bo-Jie Fu; Chan-Sheng He; Ge Sun; Guang-Yao Gao
2011-01-01
During the past few decades, China has implemented several large-scale forestation programs that have increased forest cover from 16.0% in the 1980s to 20.4% in 2009. In northern China, water is the most sensitive and limiting ecological factor. Understanding the dynamic interactions between forest ecosystems and water in different regions is essential for maximizing...
ERIC Educational Resources Information Center
Chow, Christina M.
2011-01-01
Maintaining a competitive edge within the 21st century is dependent on the cultivation of human capital, producing qualified and innovative employees capable of competing within the new global marketplace. Technological advancements in communications technology as well as large scale, infrastructure development has led to a leveled playing field…
NASA Astrophysics Data System (ADS)
Kayastha, Shilva; Kunimoto, Ryo; Horvath, Dragos; Varnek, Alexandre; Bajorath, Jürgen
2017-11-01
The analysis of structure-activity relationships (SARs) becomes rather challenging when large and heterogeneous compound data sets are studied. In such cases, many different compounds and their activities need to be compared, which quickly goes beyond the capacity of subjective assessments. For a comprehensive large-scale exploration of SARs, computational analysis and visualization methods are required. Herein, we introduce a two-layered SAR visualization scheme specifically designed for increasingly large compound data sets. The approach combines a new compound pair-based variant of generative topographic mapping (GTM), a machine learning approach for nonlinear mapping, with chemical space networks (CSNs). The GTM component provides a global view of the activity landscapes of large compound data sets, in which informative local SAR environments are identified, augmented by a numerical SAR scoring scheme. Prioritized local SAR regions are then projected into CSNs that resolve these regions at the level of individual compounds and their relationships. Analysis of CSNs makes it possible to distinguish between regions having different SAR characteristics and select compound subsets that are rich in SAR information.
Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R
2008-01-01
A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m(3) of storage volume would be needed. Secondly, membrane purification of the digestate, consisting of a decanter, microfiltration, and reverse osmosis, was investigated. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.
Effect of extreme data loss on heart rate signals quantified by entropy analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao
2015-02-01
The phenomenon of data loss always occurs in the analysis of large databases. Maintaining the stability of analysis results in the event of data loss is very important. In this paper, we use a segmentation approach to generate synthetic signals in which data segments are randomly removed from the original signal according to Gaussian and exponential distributions. The logistic map is then used for verification. Finally, two entropy measures, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to the logistic map tests; (2) base-scale entropy analysis is the more stable of the two and is not sensitive to data loss; and (3) the loss percentage of HRV signals should be kept below p = 30%, a bound that can provide useful guidance in clinical applications.
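A minimal sketch of the experiment described above, removing exponentially distributed data segments from a logistic-map series and recomputing approximate entropy, is given below; parameter values are illustrative, and base-scale entropy is not implemented here.

    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        """Standard ApEn(m, r) with tolerance r = r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def phi(mm):
            n_vec = len(x) - mm + 1
            emb = np.array([x[i:i + mm] for i in range(n_vec)])
            dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (dist <= r).mean(axis=1)           # includes the self-match, as usual
            return np.log(c).mean()

        return phi(m) - phi(m + 1)

    def remove_segments(x, loss_fraction, mean_len=20, rng=None):
        """Randomly delete segments (exponential lengths) until ~loss_fraction is gone."""
        if rng is None:
            rng = np.random.default_rng(0)
        keep = np.ones(len(x), dtype=bool)
        target = int(loss_fraction * len(x))
        while (~keep).sum() < target:
            length = max(1, min(int(rng.exponential(mean_len)), len(x) // 4))
            start = rng.integers(0, len(x) - length)
            keep[start:start + length] = False
        return x[keep]

    # Logistic map series, as in the paper's verification step.
    x = np.empty(1000)
    x[0] = 0.4
    for i in range(1, len(x)):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

    print("ApEn, original series :", round(approximate_entropy(x), 3))
    print("ApEn, 10% data loss   :", round(approximate_entropy(remove_segments(x, 0.10)), 3))
    print("ApEn, 30% data loss   :", round(approximate_entropy(remove_segments(x, 0.30)), 3))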
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
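A hedged sketch of the central linear-regression-of-efficiency idea, as commonly described (cycle efficiency E_C = F_C/F_(C-1) - 1 regressed against fluorescence F_C, with the intercept estimating Emax), is shown below on a synthetic amplification profile. The LRE Analyzer's window selection, F0 derivation and database features are not reproduced, and the profile model is an assumption made for illustration.

    import numpy as np

    # Synthetic amplification profile in which efficiency declines toward a plateau.
    F0_true, Emax_true, Fmax_true = 1e-4, 0.95, 10.0
    n_cycles = 40
    F = np.empty(n_cycles)
    F[0] = F0_true * (1.0 + Emax_true)
    for i in range(1, n_cycles):
        E = Emax_true * (1.0 - F[i - 1] / Fmax_true)   # efficiency falls as the plateau nears
        F[i] = F[i - 1] * (1.0 + E)

    # Regress E_C against F_C within the central region of the profile.
    Ec = F[1:] / F[:-1] - 1.0
    Fc = F[1:]
    central = (Fc > 0.05 * F.max()) & (Fc < 0.8 * F.max())
    slope, intercept = np.polyfit(Fc[central], Ec[central], 1)
    print(f"Emax ≈ {intercept:.3f}, Fmax ≈ {-intercept / slope:.2f}")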
A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification
Rutledge, Robert G.
2011-01-01
Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812
NASA Astrophysics Data System (ADS)
Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing
2018-04-01
Dust particles can remain suspended in the atmospheric boundary layer, motions of which are primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organizations of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that large- and very-large-scale motions in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high Reynolds number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer based on an Eulerian modeling approach and large-eddy simulation technique are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of concentration structures are determined, compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Through the quadrant analysis, it is indicated that the vertical dust transport is closely related to the large-scale roll modes, and ejections in high-concentration regions are the major mechanisms for the upward motions of dust particles.
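The quadrant analysis mentioned above can be sketched on synthetic velocity and concentration fluctuations; real LES fields would replace the random numbers used here.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    w = rng.normal(size=n)                        # vertical velocity samples (toy data)
    c = 0.3 * w + 0.5 * rng.normal(size=n)        # toy dust concentration, correlated with w

    wp, cp = w - w.mean(), c - c.mean()           # fluctuations
    flux = wp * cp                                # instantaneous vertical dust flux
    quadrants = {
        "Q1 (w'>0, c'>0)": (wp > 0) & (cp > 0),   # upward motion of high concentration
        "Q2 (w'<0, c'>0)": (wp < 0) & (cp > 0),
        "Q3 (w'<0, c'<0)": (wp < 0) & (cp < 0),
        "Q4 (w'>0, c'<0)": (wp > 0) & (cp < 0),
    }
    total = flux.sum()
    for name, sel in quadrants.items():
        print(f"{name}: fraction of total w'c' flux = {flux[sel].sum() / total:+.2f}")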
NASA Astrophysics Data System (ADS)
Madaria, Anuj R.; Kumar, Akshay; Zhou, Chongwu
2011-06-01
The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large-scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large-scale, high-quality patterned films of silver nanowires. The transparency and conductivity of the films were controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σ_DC/σ_Op, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared to other candidate alternatives to doped metal oxide films. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and a sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows high performance using another commonly used figure of merit, Φ_TE. The Ag nanowire film on PET also shows good mechanical flexibility, and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.
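For orientation, the quoted figure of merit can be estimated from the reported transparency and sheet resistance with the widely used thin-film expression T = (1 + Z0 σ_Op / (2 R_s σ_DC))^-2, where Z0 ≈ 376.7 Ω is the impedance of free space. This is a back-of-the-envelope sketch, not necessarily the authors' exact calculation.

    # Estimate sigma_DC/sigma_Op from the PET film reported above (assumed formula).
    Z0 = 376.73            # ohm, impedance of free space
    T, Rs = 0.85, 33.0     # transparency and sheet resistance (ohm/sq)

    fom = Z0 / (2.0 * Rs * (T ** -0.5 - 1.0))
    print(f"sigma_DC / sigma_Op ≈ {fom:.0f}")   # same order of magnitude as the quoted range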
BioPig: a Hadoop-based analytic toolkit for large-scale sequence data.
Nordberg, Henrik; Bhatia, Karan; Wang, Kai; Wang, Zhong
2013-12-01
The recent revolution in sequencing technologies has led to an exponential growth of sequence data. As a result, most of the current bioinformatics tools become obsolete as they fail to scale with data. To tackle this 'data deluge', here we introduce the BioPig sequence analysis toolkit as one of the solutions that scale to data and computation. We built BioPig on the Apache's Hadoop MapReduce system and the Pig data flow language. Compared with traditional serial and MPI-based algorithms, BioPig has three major advantages: first, BioPig's programmability greatly reduces development time for parallel bioinformatics applications; second, testing BioPig with up to 500 Gb sequences demonstrates that it scales automatically with size of data; and finally, BioPig can be ported without modification on many Hadoop infrastructures, as tested with Magellan system at National Energy Research Scientific Computing Center and the Amazon Elastic Compute Cloud. In summary, BioPig represents a novel program framework with the potential to greatly accelerate data-intensive bioinformatics analysis.
A study of residence time distribution using radiotracer technique in the large scale plant facility
NASA Astrophysics Data System (ADS)
Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.
2017-06-01
As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One promising application of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large-scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using radiotracers in a “larger than laboratory” scale plant setup, comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work because of its chemical properties, suitable half-life and on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
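The RTD bookkeeping is simple enough to sketch: given a background-corrected tracer response C(t), the mean residence time is the first moment of the normalized curve. The response below is a synthetic stand-in for the NaI detector signal.

    import numpy as np

    t = np.linspace(0.0, 600.0, 601)              # time since injection, s
    tau = 120.0                                   # toy tank time constant, s
    C = (t / tau) * np.exp(-t / tau)              # synthetic, background-corrected response

    dt = t[1] - t[0]
    E = C / (C.sum() * dt)                        # normalized residence time distribution E(t)
    mrt = (t * E).sum() * dt                      # first moment = mean residence time
    print(f"mean residence time ≈ {mrt:.0f} s")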
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
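The moment-scaling analysis used to compare data and simulated cascades can be sketched as follows; the field here is a synthetic stand-in for a radar rainfall scan, and the regression slopes play the role of the scaling exponents whose shape reflects the cascade generator.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 256                                            # finest grid (n x n pixels)
    field = rng.lognormal(mean=0.0, sigma=1.0, size=(n, n))   # toy rain-rate field

    def coarse_grain(f, factor):
        """Average the field over factor x factor blocks."""
        m = f.shape[0] // factor
        return f[:m * factor, :m * factor].reshape(m, factor, m, factor).mean(axis=(1, 3))

    qs = [0.5, 1.0, 2.0, 3.0]
    factors = [1, 2, 4, 8, 16, 32]
    for q in qs:
        log_scale, log_moment = [], []
        for fac in factors:
            cg = coarse_grain(field, fac)
            log_scale.append(np.log(fac))
            log_moment.append(np.log(np.mean(cg ** q)))
        slope = np.polyfit(log_scale, log_moment, 1)[0]
        print(f"q = {q}: moment-scaling slope ≈ {slope:+.3f}")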
Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas
NASA Astrophysics Data System (ADS)
Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.
2017-12-01
Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
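Estimating such a scaling exponent reduces to an ordinary least-squares fit in log-log space; the sketch below uses synthetic city populations and footprints with the exponent set near the reported 0.89, not the study's actual data.

    import numpy as np

    rng = np.random.default_rng(4)
    population = 10 ** rng.uniform(5.5, 7.3, size=65)                    # 65 toy metro areas
    water_footprint = 2.0 * population ** 0.89 * np.exp(rng.normal(0.0, 0.15, 65))

    # Fit log(WF) = beta * log(P) + log(a); beta < 1 indicates sublinear scaling.
    beta, log_a = np.polyfit(np.log(population), np.log(water_footprint), 1)
    print(f"estimated scaling exponent beta ≈ {beta:.2f}")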
Large-scale environments of narrow-line Seyfert 1 galaxies
NASA Astrophysics Data System (ADS)
Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.
2017-09-01
Studying large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources are clearly different compared to BLS1 galaxies, thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
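The paper's hierarchical Bayesian estimator is not reproduced here, but the underlying problem, regularizing a noisy sample covariance when the number of variables rivals the number of samples, can be illustrated with the off-the-shelf Ledoit-Wolf shrinkage estimator in scikit-learn (a different, commonly used regularization, shown only for orientation).

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(5)
    n_samples, n_genes = 40, 200                 # "large p, small n" OMICS-like setting
    X = rng.normal(size=(n_samples, n_genes))

    sample_cov = np.cov(X, rowvar=False)         # rank-deficient and very noisy here
    lw_cov = LedoitWolf().fit(X).covariance_     # shrunk toward a scaled identity

    print("top eigenvalue, sample covariance:", round(float(np.linalg.eigvalsh(sample_cov)[-1]), 2))
    print("top eigenvalue, shrunk covariance:", round(float(np.linalg.eigvalsh(lw_cov)[-1]), 2))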
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.
2012-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
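The map/reduce pattern described above can be illustrated in plain Python: each granule of retrievals is mapped to partial sum/count grids, partial grids are reduced by summation, and a final division yields the gridded climatology. The names, grids and granules below are invented; SciReduce itself shards real granules across cloud nodes and passes named netCDF/HDF arrays rather than these toy tuples.

    import numpy as np
    from functools import reduce

    LAT_EDGES = np.linspace(-90.0, 90.0, 37)     # 5-degree latitude bins
    LON_EDGES = np.linspace(-180.0, 180.0, 73)   # 5-degree longitude bins

    def map_granule(granule):
        """One granule of retrievals -> partial (sum, count) grids."""
        lat, lon, wv = granule
        sums, _, _ = np.histogram2d(lat, lon, bins=[LAT_EDGES, LON_EDGES], weights=wv)
        counts, _, _ = np.histogram2d(lat, lon, bins=[LAT_EDGES, LON_EDGES])
        return sums, counts

    def reduce_grids(a, b):
        """Combine two partial (sum, count) grids."""
        return a[0] + b[0], a[1] + b[1]

    rng = np.random.default_rng(6)
    granules = [(rng.uniform(-90, 90, 1000), rng.uniform(-180, 180, 1000),
                 rng.gamma(2.0, 10.0, 1000)) for _ in range(12)]     # 12 fake granules

    sums, counts = reduce(reduce_grids, map(map_granule, granules))
    climatology = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)
    print("grid shape:", climatology.shape, "| global mean:", round(float(np.nanmean(climatology)), 2))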
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Hua, H.
2012-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Summary: Sediment transport processes vary over a wide range of time scales, from seconds, hours and days to months and years. Multiple time scales exist in the coupled system of flow, sediment transport and bed elevation change. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of these varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform (HHT) method to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of the data at different time scales and improves the analysis of the data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method to both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. The HHT analysis demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula in the literature.
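A compact sketch of the HHT workflow on a synthetic two-scale series is given below; it assumes the third-party PyEMD package (pip install EMD-signal) for the EMD step and scipy for the Hilbert transform, and is not the authors' implementation.

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD     # third-party package, assumed available

    # Synthetic two-scale series standing in for a sediment transport record.
    fs = 10.0                                      # samples per hour (illustrative)
    t = np.arange(0.0, 240.0, 1.0 / fs)            # 240 hours
    rng = np.random.default_rng(7)
    signal = (np.sin(2 * np.pi * t / 12.0)         # slow, flood-wave-like variation
              + 0.3 * np.sin(2 * np.pi * t / 0.5)  # fast, turbulence-like variation
              + 0.1 * rng.normal(size=t.size))

    imfs = EMD().emd(signal)                       # Empirical Mode Decomposition -> IMFs
    for k, imf in enumerate(imfs):
        phase = np.unwrap(np.angle(hilbert(imf)))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)       # cycles per hour
        mean_period = 1.0 / np.mean(np.abs(inst_freq))
        print(f"IMF {k}: mean instantaneous period ≈ {mean_period:.2f} h")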
Children's Understanding of Large-Scale Mapping Tasks: An Analysis of Talk, Drawings, and Gesture
ERIC Educational Resources Information Center
Kotsopoulos, Donna; Cordy, Michelle; Langemeyer, Melanie
2015-01-01
This research examined how children represent motion in large-scale mapping tasks that we referred to as "motion maps". The underlying mathematical content was transformational geometry. In total, 19 children, 8 to 10 years old, created motion maps and captured them digitally with accompanying verbal descriptions. Analysis of…
Large-Scale Aerosol Modeling and Analysis
2009-09-30
Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold ... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for "Large-scale Atmospheric Models", "Small-scale Atmospheric Models
Body size and extinction risk in terrestrial mammals above the species level.
Tomiya, Susumu
2013-12-01
Mammalian body mass strongly correlates with life history and population properties at scales from mouse to elephant. Large body size is thus often associated with elevated extinction risk. I examined the North American fossil record (28-1 million years ago) of 276 terrestrial genera to uncover the relationship between body size and extinction probability above the species level. Phylogenetic comparative analysis revealed no correlation between sampling-adjusted durations and body masses ranging over 7 orders of magnitude, an observation that was corroborated by survival analysis. Most of the ecological and temporal groups within the data set showed the same lack of relationship. Size-biased generic extinctions do not constitute a general feature of the Holarctic mammalian faunas in the Neogene. Rather, accelerated loss of large mammals occurred during intervals that experienced combinations of regional aridification and increased biomic heterogeneity within continents. The latter phenomenon is consistent with the macroecological prediction that large geographic ranges are critical to the survival of large mammals in evolutionary time. The frequent lack of size selectivity in generic extinctions can be reconciled with size-biased species loss if extinctions of large and small mammals at the species level are often driven by ecological perturbations of different spatial and temporal scales, while those at the genus level are more synchronized in time as a result of fundamental, multiscale environmental shifts.
Transportation and operations aspects of space energy systems
NASA Technical Reports Server (NTRS)
Woodcock, Gordon R.
1989-01-01
A brief comparative analysis was made for three concepts of supplying large-scale electrical energy to Earth from space. The concepts were: (1) mining helium-3 on the Moon and returning it to Earth; (2) constructing solar power satellites in geosynchronous orbit from lunar materials (the energy is beamed by microwave to receivers on Earth); and (3) constructing power collection and beaming systems on the Moon itself and transmitting the energy to Earth by microwave. This analysis concerned mainly space transportation and operations, but each of the systems is briefly characterized to provide a basis for space transportation and operations analysis.
Small-Scale Spectral and Color Analysis of Ritchey Crater Impact Materials
NASA Astrophysics Data System (ADS)
Bray, Veronica; Chojnacki, Matthew; McEwen, Alfred; Heyd, Rodney
2014-11-01
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) analysis of Ritchey crater on Mars has allowed identification of the minerals uplifted from depth within its central peak as well as the dominant spectral signature of the crater fill materials which surround it. However, the 18 m/px resolution of CRISM prevents full analysis of the nature of small-scale dykes, megabreccia blocks and finer-scale crater-fill units. We extend our existing CRISM-based compositional mapping of the Ritchey crater interior to sub-CRISM pixel scales with the use of High Resolution Imaging Science Experiment (HiRISE) Color Ratio Products (CRPs). These CRPs are then compared to CRISM images; the correlation between color ratio and CRISM spectral signature for a large bedrock unit is defined and used to suggest a similar composition for a smaller unit with the same color ratio. Megabreccia deposits, angular fragments of rock in excess of 1 meter in diameter within a finer-grained matrix, are common at Ritchey. The dominant spectral signature from each megabreccia unit varies with location around Ritchey and appears to reflect the matrix composition (based on texture and albedo similarities to surrounding rocks) rather than clast composition. In cases where the breccia block size is large enough for CRISM analysis, many different mineral compositions are noted (low-calcium pyroxene (LCP), olivine (OL), alteration products) depending on the location. All block compositions (as inferred from CRPs) are observed down to the limit of HiRISE resolution. We have found a variety of dyke compositions within our mapping area. Correlation between CRP color and CRISM spectra in this area suggests that large (~10 m wide) dykes within LCP-bearing bedrock close to the crater center tend to have a similar composition to the host rock. Smaller dykes running non-parallel to the larger dykes are inferred to be OL-rich, suggesting multiple phases of dyke formation within the Ritchey crater and its bedrock.
NASA Astrophysics Data System (ADS)
Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.
2009-12-01
Quantitative analysis of solar wind fluctuations is often performed in the context of intermittent turbulence and centers around methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed. In any case, natural systems such as the solar wind unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite-variance processes, to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that in particular for heavy-tailed processes, for realizable N, one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some “real world” time series from the solar wind. With fewer data points a stationary time series becomes indistinguishable from a nonstationary process, and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
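A minimal numerical illustration of the convergence question raised above (not the authors' estimator or data): estimate the second-order structure-function exponent from N samples of a synthetic process and track the spread of the estimates as N grows.

```python
# Estimate the scaling exponent of the second-order structure function of a
# Brownian-like walk (true exponent 1) from N samples, over many realizations,
# and watch how the variance of the estimates shrinks with N. For heavy-tailed
# increments (e.g. rng.standard_t(2.5)) the convergence is much slower, which
# is the point made in the abstract above.
import numpy as np

rng = np.random.default_rng(1)

def scaling_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """Slope of log S2(tau) vs log tau, S2 being the mean squared increment."""
    s2 = [np.mean((x[l:] - x[:-l]) ** 2) for l in lags]
    return np.polyfit(np.log(lags), np.log(s2), 1)[0]

for N in (10**3, 10**4, 10**5):
    estimates = [scaling_exponent(np.cumsum(rng.standard_normal(N)))
                 for _ in range(200)]
    print(N, np.var(estimates))   # variance of the exponent estimates vs N
```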
Large Scale Metal Additive Techniques Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W
2016-01-01
In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.
Lifetime evaluation of large format CMOS mixed signal infrared devices
NASA Astrophysics Data System (ADS)
Linder, A.; Glines, Eddie
2015-09-01
New large-scale foundry processes continue to produce reliable products. These new large-scale devices continue to use industry best practices to screen for failure mechanisms and validate their long lifetimes. Failure-in-Time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large-format device lifetimes. This analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry-accepted methods.
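For context, a zero-failure life test is commonly converted into an upper-bound failure rate with the chi-squared method; the sketch below is a generic illustration under made-up sample sizes, hours, and acceleration factor, not necessarily the procedure used in this work.

```python
# Generic zero-failure Failure-in-Time (FIT) estimate: upper confidence bound
# on the failure rate via the chi-squared method, then expressed in FIT
# (failures per 1e9 device-hours). All numbers below are hypothetical.
from scipy.stats import chi2

failures = 0
devices = 77
test_hours = 1000.0
acceleration_factor = 80.0          # hypothetical Arrhenius acceleration
confidence = 0.60                   # 60% upper confidence limit

equivalent_hours = devices * test_hours * acceleration_factor
lam_upper = chi2.ppf(confidence, 2 * (failures + 1)) / (2.0 * equivalent_hours)
fit = lam_upper * 1e9
print(f"Upper-bound failure rate: {fit:.1f} FIT")
```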
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
Nonlinear modulation of the HI power spectrum on ultra-large scales. I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna; Maartens, Roy; Santos, Mario, E-mail: umeobinna@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: mgrsantos@uwc.ac.za
2016-03-01
Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.
Analysis of blood-based gene expression in idiopathic Parkinson disease.
Shamir, Ron; Klein, Christine; Amar, David; Vollstedt, Eva-Juliane; Bonin, Michael; Usenovic, Marija; Wong, Yvette C; Maver, Ales; Poths, Sven; Safer, Hershel; Corvol, Jean-Christophe; Lesage, Suzanne; Lavi, Ofer; Deuschl, Günther; Kuhlenbaeumer, Gregor; Pawlack, Heike; Ulitsky, Igor; Kasten, Meike; Riess, Olaf; Brice, Alexis; Peterlin, Borut; Krainc, Dimitri
2017-10-17
We examined whether gene expression analysis of a large-scale Parkinson disease (PD) patient cohort produces a robust blood-based PD gene signature, compared to previous studies that have used relatively small cohorts (≤220 samples). Whole-blood gene expression profiles were collected from a total of 523 individuals. After preprocessing, the data contained 486 gene profiles (n = 205 PD, n = 233 controls, n = 48 other neurodegenerative diseases) that were partitioned into training, validation, and independent test cohorts to identify and validate a gene signature. Batch-effect reduction and cross-validation were performed to ensure signature reliability. Finally, functional and pathway enrichment analyses were applied to the signature to identify PD-associated gene networks. A gene signature of 100 probes that mapped to 87 genes, corresponding to 64 upregulated and 23 downregulated genes differentiating between patients with idiopathic PD and controls, was identified with the training cohort and successfully replicated in both an independent validation cohort (area under the curve [AUC] = 0.79, p = 7.13E-6) and a subsequent independent test cohort (AUC = 0.74, p = 4.2E-4). Network analysis of the signature revealed gene enrichment in pathways, including metabolism, oxidation, and ubiquitination/proteasomal activity, and misregulation of mitochondria-localized genes, including downregulation of COX4I1, ATP5A1, and VDAC3. We present a large-scale study of PD gene expression profiling. This work identifies a reliable blood-based PD signature and highlights the importance of large-scale patient cohorts in developing potential PD biomarkers. © 2017 American Academy of Neurology.
A comparative analysis of rawinsonde and NIMBUS 6 and TIROS N satellite profile data
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Carle, W. E.; Knight, K.; Moyer, V.; Cheng, N. M.
1981-01-01
Comparisons are made between rawinsonde and satellite profiles in seven areas for a wide range of surface and weather conditions. Variables considered include temperature, dewpoint temperature, thickness, precipitable water, lapse rate of temperature, stability, geopotential height, mixing ratio, wind direction, wind speed, and kinematic parameters, including vorticity and the advection of vorticity and temperature. In addition, comparisons are made in the form of cross sections and synoptic fields for selected variables. Sounding data from the NIMBUS 6 and TIROS N satellites were used. Geostrophic wind computed from smoothed geopotential heights provided large scale flow patterns that agreed well with the rawinsonde wind fields. Surface wind patterns as well as magnitudes computed by use of the log law to extrapolate wind to a height of 10 m agreed with observations. Results of this study demonstrate rather conclusively that satellite profile data can be used to determine characteristics of large scale systems but that small scale features, such as frontal zones, cannot yet be resolved.
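As a side note on the geostrophic-wind step mentioned above, the standard calculation from a smoothed geopotential height field can be sketched in a few lines of NumPy; the height field, grid, and spacings below are synthetic stand-ins, not the study's data.

```python
# Geostrophic wind from a geopotential height field:
#   u_g = -(g/f) dZ/dy,   v_g = (g/f) dZ/dx
import numpy as np

g = 9.81                                   # gravitational acceleration (m s^-2)
omega = 7.292e-5                           # Earth's rotation rate (s^-1)
R_earth = 6.371e6                          # Earth radius (m)

lat = np.linspace(25, 55, 31)              # degrees north
lon = np.linspace(-110, -70, 41)           # degrees east
f = 2 * omega * np.sin(np.deg2rad(lat))[:, None]   # Coriolis parameter

# Synthetic 500-hPa height field (m): meridional gradient plus a zonal wave.
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
Z = 5600 - 4.0 * (LAT - 40) + 50 * np.sin(np.deg2rad(4 * LON))

# Grid spacing in metres (approximate spherical geometry).
dy = np.deg2rad(lat[1] - lat[0]) * R_earth
dx = np.deg2rad(lon[1] - lon[0]) * R_earth * np.cos(np.deg2rad(lat))[:, None]

dZdy = np.gradient(Z, axis=0) / dy
dZdx = np.gradient(Z, axis=1) / dx
u_g = -(g / f) * dZdy                      # zonal geostrophic wind (m s^-1)
v_g = (g / f) * dZdx                       # meridional geostrophic wind (m s^-1)
print(u_g.mean(), v_g.mean())
```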
NASA Astrophysics Data System (ADS)
Berg, M.; Wilcox, B. P.; Angerer, J.; Marcantonio, F.; Fox, W.; Popescu, S. C.
2013-12-01
Long-running abandonment of marginal croplands and woody plant encroachment have been observed in many landscapes around the world, often in association with one another. However, there is great uncertainty about the consequences of these trends, and very few studies have examined impacts at the watershed scale. In watersheds totaling 230 km2 in Texas, we used an integrated approach of sediment chronosequencing, historical imagery analysis, and streamflow analysis to describe landscape dynamics and investigate the large-scale effects of changing land use and land cover. The picture is quite complex. Instead of uniform woody plant encroachment, shrubs have undergone a marked decrease in some areas through management efforts. As a result, woody plants have experienced up to a 100% increase in one watershed compared with a 65% decline in another. This accompanies a nearly 85% abandonment of cropland across the area over the last 75 years. While streamflow appears to remain driven primarily by rainfall events, erosion and sedimentation of downstream reservoirs have great implications for water resources. Radioisotope sediment tracers indicate a doubling in sediment yield in certain watersheds while others have displayed a near halt in sediment production. These changes are largely tied to the dynamic relationship between herbaceous, bare-ground, and woody plant cover in different watersheds, as well as the proliferation of constructed small ponds, which have increased in number by up to 700%. Understanding the dynamics of water and sediment yield through this approach may play a major role in informing rangeland and water resource management at large scales.
NASA Astrophysics Data System (ADS)
Cozzoli, Francesco; Smolders, Sven; Eelkema, Menno; Ysebaert, Tom; Escaravage, Vincent; Temmerman, Stijn; Meire, Patrick; Herman, Peter M. J.; Bouma, Tjeerd J.
2017-01-01
Natural coastal hydrodynamics and morphology worldwide are altered by human interventions such as embankments, shipping and dredging, which may have consequences for ecosystem functionality. Ensuring long-term ecological sustainability requires the capability to predict the long-term, large-scale ecological effects of altered hydromorphology. As empirical data sets at relevant scales are missing, there is a need to integrate ecological modeling with physical modeling. This paper presents a case study showing the long-term, large-scale macrozoobenthic community response to two contrasting human alterations of the hydromorphological habitat: deepening of estuarine channels to enhance navigability (Westerschelde) vs. realization of a storm surge barrier to enhance coastal safety (Oosterschelde). A multidisciplinary integration of empirical data and modeling of estuarine morphology, hydrodynamics and benthic ecology was used to reconstruct the hydrological evolution and the resulting long-term (50 years), large-scale ecological trends for both estuaries. Our model indicated that hydrodynamic alterations following the deepening of the Westerschelde had negative implications for benthic life, while the realization of the Oosterschelde storm surge barrier had mixed and habitat-dependent responses, including an unexpected improvement of environmental quality. Our analysis illustrates long-term trends in the natural community caused by opposing management strategies. The divergent human pressures on the Oosterschelde and Westerschelde are examples of what could happen in the near future for many coastal ecosystems globally. The comparative analysis of the two basins is a valuable source of information for understanding (and communicating) the future ecological consequences of human coastal development.
NASA Technical Reports Server (NTRS)
Furlong, G Chester; Mchugh, James G
1957-01-01
An analysis of the longitudinal characteristics of swept wings, based on available large-scale low-speed data and supplemented with small-scale data when feasible, is presented. The emphasis has been placed on differentiating the characteristics according to the basic flow phenomena involved. Insofar as possible, all large-scale data available as of August 15, 1951 have been summarized in tabular form for ready reference.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
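A compact sketch of the conventional DWT fusion baseline that MEMD is compared against (not MEMD itself, whose implementation is substantially more involved) can be written with PyWavelets; the library choice and the synthetic "exposures" below are assumptions made purely for illustration.

```python
# Classic multi-scale (DWT) image fusion: average the coarse approximation and
# keep the larger-magnitude detail coefficient at each position and scale.
import numpy as np
import pywt

rng = np.random.default_rng(6)
scene = rng.random((64, 64))
img_a = np.clip(scene * 1.4, 0, 1)          # over-exposed version of the scene
img_b = np.clip(scene * 0.6, 0, 1)          # under-exposed version

ca = pywt.wavedec2(img_a, "db2", level=2)
cb = pywt.wavedec2(img_b, "db2", level=2)

fused = [0.5 * (ca[0] + cb[0])]             # average the approximation band
for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
    fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                       for x, y in ((ha, hb), (va, vb), (da, db))))

fused_img = pywt.waverec2(fused, "db2")
print(fused_img.shape)
```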
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.
Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A
2017-03-01
Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
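The practical consequence of zero inflation can be illustrated with a short simulation, using roughly the 34% zero share and mean of 3 reported above (this is only an illustration, not the paper's analysis):

```python
# Simulate zero-inflated 0-10 NRS-style scores and compare the observed share
# of zeroes with what a plain Poisson fit to the same data would predict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 18_935
p_zero = 0.34                                     # structural "no pain" group
counts = rng.poisson(4.5, size=n)                 # latent pain scores
counts[rng.random(n) < p_zero] = 0                # inject structural zeroes
counts = np.clip(counts, 0, 10)                   # keep on the 0-10 NRS range

lam_hat = counts.mean()                           # plain Poisson MLE of the mean
p0_poisson = stats.poisson.pmf(0, lam_hat)        # zeroes implied by that fit
p0_observed = np.mean(counts == 0)
print(f"observed P(NRS=0) = {p0_observed:.2f}, Poisson-implied = {p0_poisson:.2f}")
```

The mismatch between the two probabilities is the kind of lack of fit that zero-inflated and hurdle models are designed to absorb.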
Yap, Kien-Pong; Ho, Wing S; Gan, Han M; Chai, Lay C; Thong, Kwai L
2016-01-01
Typhoid fever, caused by Salmonella enterica serovar Typhi, remains an important public health burden in Southeast Asia and other endemic countries. Various genotyping methods have been applied to study the genetic variations of this human-restricted pathogen. Multilocus sequence typing (MLST) is one of the widely accepted methods, and recently, there is a growing interest in the re-application of MLST in the post-genomic era. In this study, we provide the global MLST distribution of S. Typhi utilizing both publicly available 1,826 S. Typhi genome sequences in addition to performing conventional MLST on S. Typhi strains isolated from various endemic regions spanning over a century. Our global MLST analysis confirms the predominance of two sequence types (ST1 and ST2) co-existing in the endemic regions. Interestingly, S. Typhi strains with ST8 are currently confined within the African continent. Comparative genomic analyses of ST8 and other rare STs with genomes of ST1/ST2 revealed unique mutations in important virulence genes such as flhB, sipC, and tviD that may explain the variations that differentiate between seemingly successful (widespread) and unsuccessful (poor dissemination) S. Typhi populations. Large scale whole-genome phylogeny demonstrated evidence of phylogeographical structuring and showed that ST8 may have diverged from the earlier ancestral population of ST1 and ST2, which later lost some of its fitness advantages, leading to poor worldwide dissemination. In response to the unprecedented increase in genomic data, this study demonstrates and highlights the utility of large-scale genome-based MLST as a quick and effective approach to narrow the scope of in-depth comparative genomic analysis and consequently provide new insights into the fine scale of pathogen evolution and population structure.
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably in all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, which is more than two times larger than that of small-scale hydropower; the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
Scaling up to address data science challenges
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
ERIC Educational Resources Information Center
Bivona, William A.
This report presents an analysis of over eighteen small, intermediate, and large-scale systems for the selective dissemination of information (SDI). Systems are compared and analyzed with respect to design criteria and the following nine system parameters--(1) information input, (2) methods of indexing and abstracting, (3) user interest profile…
NASA Astrophysics Data System (ADS)
Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.
2017-02-01
The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Over the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both image-labeling and nuclei-labeling tasks (83% and 87%), as compared to the concordance achieved by the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies.
NASA Astrophysics Data System (ADS)
Tang, Shuaiqi; Zhang, Minghua
2015-08-01
Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data for driving single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be directly measured from field measurements or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis, 1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in available products such as the Rapid Update Cycle, ERA-Interim, National Centers for Environmental Prediction Climate Forecast System Reanalysis, Modern-Era Retrospective Analysis for Research and Applications, Japanese 55-year Reanalysis, and North American Regional Reanalysis. We show that although the analyses/reanalyses generally capture the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at a regional scale of a few hundred kilometers are large, and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds, and are therefore better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.
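For orientation, Q1 and Q2 above conventionally denote the Yanai-type apparent heat source and apparent moisture sink; a standard textbook form (which may differ in detail from the exact formulation used in this work) is

```latex
Q_1 = c_p \left[ \frac{\partial T}{\partial t} + \mathbf{V}\cdot\nabla T
      + \left(\frac{p}{p_0}\right)^{R/c_p} \omega \,\frac{\partial \theta}{\partial p} \right],
\qquad
Q_2 = -L \left[ \frac{\partial q}{\partial t} + \mathbf{V}\cdot\nabla q
      + \omega \,\frac{\partial q}{\partial p} \right],
```

with T temperature, θ potential temperature, q specific humidity, ω pressure velocity, V the horizontal wind, and L the latent heat of vaporization.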
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
Planck intermediate results: XLII. Large-scale Galactic magnetic fields
Adam, R.; Ade, P. A. R.; Alves, M. I. R.; ...
2016-12-12
Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. In this paper, we use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Finally, though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
Research on early warning systems for large-scale network security incidents is of great significance: such systems can improve a network's emergency response capabilities, mitigate the damage caused by cyber attacks, and strengthen the system's ability to counterattack. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technologies of the system are the main focus. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology using a force-based automatic layout algorithm. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale topology visualization and layout problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
ERIC Educational Resources Information Center
Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.
2016-01-01
A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The assumption is testable, however, and thus need no longer be treated as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average lies far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology, so the simulation is also not homogeneous at that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
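A toy version of the counts-in-spheres comparison described above can be sketched with NumPy; real survey geometry, selection effects, and the LRG catalogue itself are not modelled here, only the homogeneous (Poisson) reference case.

```python
# Count points inside randomly placed spheres in a homogeneous Poisson point
# set and compare the dispersion of the counts with the Poisson expectation
# (variance/mean close to 1); clustered data would show a larger dispersion.
import numpy as np

rng = np.random.default_rng(8)
box = 1000.0                                   # box side (arbitrary units)
points = rng.random((100_000, 3)) * box
radius = 100.0

centres = rng.random((500, 3)) * (box - 2 * radius) + radius   # spheres kept inside the box
counts = np.array([np.sum(np.sum((points - c) ** 2, axis=1) < radius**2)
                   for c in centres])

mean, var = counts.mean(), counts.var()
print(f"mean count = {mean:.1f}, variance/mean = {var / mean:.2f}")
```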
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved using the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending their applicability to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
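For readers unfamiliar with the LMI machinery, the simplest (centralised, single-node) instance is the Lyapunov feasibility problem sketched below with CVXPY; this is only a generic illustration on assumed data, not the paper's distributed stabilisation conditions.

```python
# Lyapunov LMI feasibility: find P = P^T > 0 with A^T P + P A < 0,
# which certifies that x' = A x is asymptotically stable.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # a stable test matrix (made up)
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
print(P.value)
```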
The use of smoke acid as an alternative coagulating agent for natural rubber sheets' production.
Ferreira, Vanda S; Rêgo, Ione N C; Pastore, Floriano; Mandai, Mariana M; Mendes, Leonardo S; Santos, Karin A M; Rubim, Joel C; Suarez, Paulo A Z
2005-03-01
A comparative study of rubber sheets obtained using formic acid, acetic acid, and smoke acid as coagulants is presented for latex obtained from native Amazonian trees and also from commercially cultivated trees. The evaluation of both coagulation processes was carried out by spectroscopic and physical-chemical analysis, showing no differences in the rubber sheets obtained. This new method of rubber sheet preparation was introduced into Amazonian rainforest rubber tapper communities, which are now producing at large scale. The physical-mechanical properties were similar among sheets made by different rubber tapper communities using this new method.
Tools for understanding landscapes: combining large-scale surveys to characterize change. Chapter 9.
W. Keith Moser; Janine Bolliger; Don C. Bragg; Mark H. Hansen; Mark A. Hatfield; Timothy A. Nigh; Lisa A. Schulte
2008-01-01
All landscapes change continuously. Since change is perceived and interpreted through measures of scale, any quantitative analysis of landscapes must identify and describe the spatiotemporal mosaics shaped by large-scale structures and processes. This process is controlled by core influences, or "drivers," that shape the change and affect the outcome...
Singh, G. D.; McNamara Jr, J. A.; Lozanoff, S.
1997-01-01
This study determines deformations of the midface that contribute to a class III appearance, employing thin-plate spline analysis. A total of 135 lateral cephalographs of prepubertal children of European-American descent with either class III malocclusions or a class I molar occlusion were compared. The cephalographs were traced and checked, and 7 homologous landmarks of the midface were identified and digitised. The data sets were scaled to an equivalent size and subjected to Procrustes analysis. These statistical tests indicated significant differences (P<0.05) between the averaged class I and class III morphologies. Thin-plate spline analysis indicated that both affine and nonaffine transformations contribute towards the total spline for the averaged midfacial configuration. For nonaffine transformations, partial warp 3 had the highest magnitude, indicating the large scale deformations of the midfacial configuration. These deformations affected the palatal landmarks, and were associated with compression of the midfacial complex in the anteroposterior plane predominantly. Partial warp 4 produced some vertical compression of the posterior aspect of the midfacial complex whereas partial warps 1 and 2 indicated localised shape changes of the maxillary alveolus region. Large spatial-scale deformations therefore affect the midfacial complex in an anteroposterior axis, in combination with vertical compression and localised distortions. These deformations may represent a developmental diminution of the palatal complex anteroposteriorly that, allied with vertical shortening of midfacial height posteriorly, results in class III malocclusions with a retrusive midfacial profile. PMID:9449078
Singh, G D; McNamara, J A; Lozanoff, S
1997-11-01
This study determines deformations of the midface that contribute to a class III appearance, employing thin-plate spline analysis. A total of 135 lateral cephalographs of prepubertal children of European-American descent with either class III malocclusions or a class I molar occlusion were compared. The cephalographs were traced and checked, and 7 homologous landmarks of the midface were identified and digitised. The data sets were scaled to an equivalent size and subjected to Procrustes analysis. These statistical tests indicated significant differences (P < 0.05) between the averaged class I and class III morphologies. Thin-plate spline analysis indicated that both affine and nonaffine transformations contribute towards the total spline for the averaged midfacial configuration. For nonaffine transformations, partial warp 3 had the highest magnitude, indicating the large-scale deformations of the midfacial configuration. These deformations affected the palatal landmarks, and were associated with compression of the midfacial complex predominantly in the anteroposterior plane. Partial warp 4 produced some vertical compression of the posterior aspect of the midfacial complex, whereas partial warps 1 and 2 indicated localised shape changes of the maxillary alveolus region. Large spatial-scale deformations therefore affect the midfacial complex in an anteroposterior axis, in combination with vertical compression and localised distortions. These deformations may represent a developmental diminution of the palatal complex anteroposteriorly that, allied with vertical shortening of midfacial height posteriorly, results in class III malocclusions with a retrusive midfacial profile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
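A deliberately simplified, sequential sketch of what a triad census computes is given below; it groups triads by their raw 6-bit directed-edge pattern rather than the 16 canonical isomorphism classes, and it enumerates all C(n,3) triples, which of course does not scale to the graph sizes discussed above.

```python
# Toy triad census for a small directed graph: for every 3-node subset,
# record which of the 6 possible directed edges among the nodes are present.
from itertools import combinations
from collections import Counter

edges = {(0, 1), (1, 2), (2, 0), (1, 0), (3, 1)}   # toy directed graph
nodes = {u for e in edges for u in e}

census = Counter()
for a, b, c in combinations(sorted(nodes), 3):
    pattern = tuple(int(pair in edges) for pair in
                    ((a, b), (b, a), (a, c), (c, a), (b, c), (c, b)))
    census[pattern] += 1

for pattern, count in census.items():
    print(pattern, count)
```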
Spline analysis of the mandible in human subjects with class III malocclusion.
Singh, G D; McNamara, J A; Lozanoff, S
1997-05-01
This study determines deformations that contribute to a Class III mandibular morphology, employing thin-plate spline (TPS) analysis. A total of 133 lateral cephalographs of prepubertal children of European-American descent with either a Class I molar occlusion or a Class III malocclusion were compared. The cephalographs were traced and checked, and eight homologous landmarks on the mandible were identified and digitized. The datasets were scaled to an equivalent size and subjected to statistical analyses. These tests indicated significant differences between average Class I and Class III mandibular morphologies. When the sample was subdivided into seven age- and sex-matched groups, statistical differences were maintained for each group. TPS analysis indicated that both affine (uniform) and non-affine transformations contribute towards the total spline, and towards the average mandibular morphology in each age group. For non-affine transformations, partial warp 5 had the highest magnitude, indicating large-scale deformations of the mandibular configuration between articulare and pogonion. In contrast, partial warp 1 indicated localized shape changes in the mandibular symphyseal region. It is concluded that large spatial-scale deformations affect the body of the mandible, in combination with localized distortions further anteriorly. These deformations may represent a developmental elongation of the mandibular corpus antero-posteriorly that, allied with symphyseal changes, leads to the appearance of a Class III prognathic mandibular profile.
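The Procrustes superimposition step these spline studies rely on can be sketched with SciPy; the eight 2-D "landmarks" below are random stand-ins for the digitised cephalograph landmarks, so this only illustrates the scaling-and-rotation alignment, not the studies' results.

```python
# Procrustes superimposition of two landmark configurations: scale both to
# equivalent size and optimally rotate one onto the other before comparing shape.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(3)
class_i = rng.random((8, 2))                                # 8 homologous 2-D landmarks
class_iii = class_i + 0.05 * rng.standard_normal((8, 2))    # perturbed configuration

mtx1, mtx2, disparity = procrustes(class_i, class_iii)
print(f"Procrustes disparity between the two configurations: {disparity:.4f}")
```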
Large Scale EOF Analysis of Climate Data
NASA Astrophysics Data System (ADS)
Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.
2016-12-01
We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare them to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
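The linear-algebra core of this analysis (EOFs via an SVD of the latitude-weighted, time-centred data matrix) can be sketched in plain NumPy; the toy grid below stands in for the distributed Spark implementation and the actual reanalysis data.

```python
# EOFs from a (time x space) data matrix: weight by sqrt(cos(latitude)),
# remove the time mean, then take an economy-size SVD. Rows of Vt are the
# EOF spatial patterns; U * s gives the principal-component time series.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_lat, n_lon = 120, 30, 60
lat = np.linspace(-89, 89, n_lat)
field = rng.standard_normal((n_time, n_lat, n_lon))     # stand-in for temperature anomalies

weights = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]
X = (field * weights).reshape(n_time, -1)
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt[:100].reshape(-1, n_lat, n_lon)               # first 100 EOF patterns
explained = s**2 / np.sum(s**2)                         # fraction of variance per mode
print(eofs.shape, explained[:3])
```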
Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM
NASA Astrophysics Data System (ADS)
Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak
2015-04-01
Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid-scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
Large Scale Processes and Extreme Floods in Brazil
NASA Astrophysics Data System (ADS)
Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.
2016-12-01
Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. We investigate, for individual sites, the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).
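The dimension-reduction-plus-clustering step mentioned above can be sketched with scikit-learn, using ordinary (unsupervised) kernel PCA as a stand-in for the supervised variant the authors describe; the moisture-flux "maps" below are random placeholders.

```python
# Embed each pre-flood moisture-flux field in a low-dimensional kernel-PCA
# space, then cluster the events to separate (for example) large-scale
# moisture-transport floods from locally driven convective floods.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_events, n_gridpoints = 200, 500
flux_maps = rng.standard_normal((n_events, n_gridpoints))   # placeholder fields

embedding = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3).fit_transform(flux_maps)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))   # number of flood events per circulation cluster
```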
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant across time-scales (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach consisted of (1) decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting.
In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
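A minimal sketch of the scale-by-scale idea on synthetic data: decompose the predictand with a discrete wavelet transform, fit one linear model per scale from large-scale predictors, then sum the scale-dependent predictions. The wavelet family, decomposition level, and the plain linear regression below are illustrative assumptions, not the study's actual choices.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, level = 512, 4
predictors = rng.standard_normal((n, 10))            # e.g. leading SLP principal components
streamflow = predictors[:, 0] + 0.5 * rng.standard_normal(n)

coeffs = pywt.wavedec(streamflow, "db4", level=level)
reconstruction = np.zeros(n)
for i in range(len(coeffs)):
    # keep only one set of coefficients -> one scale-dependent component of the predictand
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    component = pywt.waverec(keep, "db4")[:n]
    model = LinearRegression().fit(predictors, component)   # one downscaling model per scale
    reconstruction += model.predict(predictors)             # sum of scale-dependent models

print("correlation with observed series:", np.corrcoef(reconstruction, streamflow)[0, 1])
```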
Lix, Lisa M; Wu, Xiuyun; Hopman, Wilma; Mayo, Nancy; Sajobi, Tolulope T; Liu, Juxin; Prior, Jerilynn C; Papaioannou, Alexandra; Josse, Robert G; Towheed, Tanveer E; Davison, K Shawn; Sawatzky, Richard
2016-01-01
Self-reported health status measures, like the Short Form 36-item Health Survey (SF-36), can provide rich information about the overall health of a population and its components, such as physical, mental, and social health. However, differential item functioning (DIF), which arises when population sub-groups with the same underlying (i.e., latent) level of health have different measured item response probabilities, may compromise the comparability of these measures. The purpose of this study was to test for DIF on the SF-36 physical functioning (PF) and mental health (MH) sub-scale items in a Canadian population-based sample. Study data were from the prospective Canadian Multicentre Osteoporosis Study (CaMos), which collected baseline data in 1996-1997. DIF was tested using a multiple indicators multiple causes (MIMIC) method. Confirmatory factor analysis defined the latent variable measurement model for the item responses and latent variable regression with demographic and health status covariates (i.e., sex, age group, body weight, self-perceived general health) produced estimates of the magnitude of DIF effects. The CaMos cohort consisted of 9423 respondents; 69.4% were female and 51.7% were less than 65 years. Eight of 10 items on the PF sub-scale and four of five items on the MH sub-scale exhibited DIF. Large DIF effects were observed on PF sub-scale items about vigorous and moderate activities, lifting and carrying groceries, walking one block, and bathing or dressing. On the MH sub-scale items, all DIF effects were small or moderate in size. SF-36 PF and MH sub-scale scores were not comparable across population sub-groups defined by demographic and health status variables due to the effects of DIF, although the magnitude of this bias was not large for most items. We recommend testing and adjusting for DIF to ensure comparability of the SF-36 in population-based investigations.
Large-angle correlations in the cosmic microwave background
NASA Astrophysics Data System (ADS)
Efstathiou, George; Ma, Yin-Zhe; Hanson, Duncan
2010-10-01
It has been argued recently by Copi et al. 2009 that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, inflationary Lambda cold dark matter (ΛCDM) cosmology. We compare various estimators of the temperature correlation function showing how they depend on assumptions of statistical isotropy and how they perform on the Wilkinson Microwave Anisotropy Probe (WMAP) 5-yr Internal Linear Combination (ILC) maps with and without a sky cut. We show that the low multipole harmonics that determine the large-scale features of the temperature correlation function can be reconstructed accurately from the data that lie outside the sky cuts. The reconstructions are only weakly dependent on the assumed statistical properties of the temperature field. The temperature correlation functions computed from these reconstructions are in good agreement with those computed from the ILC map over the whole sky. We conclude that the large-scale angular correlation function for our realization of the sky is well determined. A Bayesian analysis of the large-scale correlations is presented, which shows that the data cannot exclude the standard ΛCDM model. We discuss the differences between our results and those of Copi et al. Either there exists a violation of statistical isotropy as claimed by Copi et al., or these authors have overestimated the significance of the discrepancy because of a posteriori choices of estimator, statistic and sky cut.
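A small worked example of the quantity at issue above, assuming a toy angular power spectrum: the large-angle correlation function is the Legendre sum C(theta) = sum_l (2l+1)/(4*pi) C_l P_l(cos theta), usually evaluated from l = 2 upward. The C_l values below are illustrative only, not WMAP data.

```python
import numpy as np
from scipy.special import eval_legendre

lmax = 32
ell = np.arange(2, lmax + 1)
cl = 1.0 / (ell * (ell + 1.0))          # toy, roughly scale-invariant spectrum

theta = np.linspace(0.0, np.pi, 181)    # 1-degree steps
x = np.cos(theta)
c_theta = sum((2 * l + 1) / (4 * np.pi) * c * eval_legendre(l, x) for l, c in zip(ell, cl))
print("C(60 deg) =", c_theta[60])
```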
Nowrousian, Minou; Würtz, Christian; Pöggeler, Stefanie; Kück, Ulrich
2004-03-01
One of the most challenging parts of large scale sequencing projects is the identification of functional elements encoded in a genome. Recently, studies of genomes of up to six different Saccharomyces species have demonstrated that a comparative analysis of genome sequences from closely related species is a powerful approach to identify open reading frames and other functional regions within genomes [Science 301 (2003) 71, Nature 423 (2003) 241]. Here, we present a comparison of selected sequences from Sordaria macrospora to their corresponding Neurospora crassa orthologous regions. Our analysis indicates that due to the high degree of sequence similarity and conservation of overall genomic organization, S. macrospora sequence information can be used to simplify the annotation of the N. crassa genome.
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Glover, David M.; Doney, Scott C.; Oestreich, William K.; Tullo, Alisdair W.
2018-01-01
Mesoscale (10-300 km, weeks to months) physical variability strongly modulates the structure and dynamics of planktonic marine ecosystems via both turbulent advection and environmental impacts upon biological rates. Using structure function analysis (geostatistics), we quantify the mesoscale biological signals within global 13 year SeaWiFS (1998-2010) and 8 year MODIS/Aqua (2003-2010) chlorophyll a ocean color data (Level-3, 9 km resolution). We present geographical distributions, seasonality, and interannual variability of key geostatistical parameters: unresolved variability or noise, resolved variability, and spatial range. Resolved variability is nearly identical for both instruments, indicating that geostatistical techniques isolate a robust measure of biophysical mesoscale variability largely independent of measurement platform. In contrast, unresolved variability in MODIS/Aqua is substantially lower than in SeaWiFS, especially in oligotrophic waters where previous analysis identified a problem for the SeaWiFS instrument likely due to sensor noise characteristics. Both records exhibit a statistically significant relationship between resolved mesoscale variability and the low-pass filtered chlorophyll field horizontal gradient magnitude, consistent with physical stirring acting on large-scale gradient as an important factor supporting observed mesoscale variability. Comparable horizontal length scales for variability are found from tracer-based scaling arguments and geostatistical decorrelation. Regional variations between these length scales may reflect scale dependence of biological mechanisms that also create variability directly at the mesoscale, for example, enhanced net phytoplankton growth in coastal and frontal upwelling and convective mixing regions. Global estimates of mesoscale biophysical variability provide an improved basis for evaluating higher resolution, coupled ecosystem-ocean general circulation models, and data assimilation.
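A hedged sketch of the geostatistical decomposition used above: build an empirical semivariogram from point samples and fit an exponential model whose nugget, sill and range play the roles of unresolved variability, resolved variability and spatial scale. The data and the specific model form are illustrative assumptions, not the SeaWiFS/MODIS analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
xy = rng.uniform(0, 300, size=(400, 2))                         # sample positions (km)
chl = np.sin(xy[:, 0] / 60.0) + 0.3 * rng.standard_normal(400)  # synthetic log-chlorophyll

# empirical semivariogram: 0.5 * mean squared difference per distance bin
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
iu = np.triu_indices_from(d, k=1)
dist, dvar = d[iu], 0.5 * (chl[:, None] - chl[None, :])[iu] ** 2
bins = np.linspace(0, 150, 16)
idx = np.digitize(dist, bins)
h = np.array([dist[idx == i].mean() for i in range(1, len(bins))])
gamma = np.array([dvar[idx == i].mean() for i in range(1, len(bins))])

def exp_model(h, nugget, sill, rng_km):
    # nugget ~ unresolved variability, sill ~ resolved variability, rng_km ~ spatial range
    return nugget + sill * (1.0 - np.exp(-h / rng_km))

(nugget, sill, rng_km), _ = curve_fit(exp_model, h, gamma, p0=[0.05, 0.5, 30.0])
print(f"nugget={nugget:.3f}  sill={sill:.3f}  range={rng_km:.1f} km")
```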
MJO: Asymptotically-Nondivergent Nonlinear Wave?: A Review
NASA Astrophysics Data System (ADS)
Yano, J. I.
2014-12-01
The MJO is often considered a convectively-coupled wave. The present talk argues that it is best understood primarily as a nonlinear solitary wave dominated by vorticity; the role of convection is secondary, though likely catalytic. According to Charney's (1963) scale analysis, the large-scale tropical circulations are nondivergent to the leading order, i.e., dominated by rotational flows. Yano et al. (2009) demonstrate that this is indeed the case for a period dominated by three MJO events. The scale analysis of Yano and Bonazzola (2009, JAS) demonstrates that such an asymptotically nondivergent regime is a viable alternative to the traditionally-believed equatorial-wave regime. Wedi and Smolarkiewicz (2010, JAS), in turn, show by numerical computations of a dry system that an MJO-like oscillation with a similar period can indeed be generated by free solitary nonlinear equatorial Rossby-wave dynamics without any convective forcing applied to the system. Unfortunately, this perspective has been slow to gain acceptance, with attention so firmly fixed on the role of convection. This situation may be compared to the slow historical acceptance of Eady and Charney's baroclinic instability, simply because it does not invoke any convection. Ironically, once the nonlinear free-wave view of the MJO is accepted, interpretations can more easily be developed for a recent series of numerical model experiments under a global channel configuration over the tropics at high resolutions of 5-50 km, with or without convection parameterization. All those experiments tend to reproduce the observed large-scale circulations associated with the MJO rather well, though most of the time they fail to reproduce the convective coherency associated with the MJO. These large-scale circulations appear to be generated by lateral forcing imposed at the latitudinal walls. These lateral boundaries are reasonably far away (30N/S), so any direct influence on the tropics is unlikely; there is no linear dry equatorial wave that supports this period either. In Wedi and Smolarkiewicz's analysis, such lateral forcing is essential in order to obtain their nonlinear solitary wave solution. This is thus the leading-order solution for the MJO, in the same sense that linear baroclinic instability is a leading-order solution for the midlatitude synoptic-scale storm.
Bi-Force: large-scale bicluster editing and its application to gene expression data biclustering
Sun, Peng; Speicher, Nora K.; Röttger, Richard; Guo, Jiong; Baumbach, Jan
2014-01-01
Abstract The explosion of biological data has dramatically reshaped today's biological research. The need to integrate and analyze high-dimensional biological data on a large scale is driving the development of novel bioinformatics approaches. Biclustering, also known as ‘simultaneous clustering’ or ‘co-clustering’, has been successfully utilized to discover local patterns in gene expression data and similar biomedical data types. Here, we contribute a new heuristic: ‘Bi-Force’. It is based on the weighted bicluster editing model and performs biclustering on arbitrary sets of biological entities, given any kind of pairwise similarities. We first evaluated the power of Bi-Force to solve dedicated bicluster editing problems by comparing Bi-Force with two existing algorithms in the BiCluE software package. We then followed the biclustering evaluation protocol of a recent review paper from Eren et al. (2013) (A comparative analysis of biclustering algorithms for gene expression data. Brief. Bioinform., 14:279–292.) and compared Bi-Force against eight existing tools: FABIA, QUBIC, Cheng and Church, Plaid, BiMax, Spectral, xMOTIFs and ISA. To this end, a suite of synthetic datasets as well as nine large gene expression datasets from Gene Expression Omnibus were analyzed. All resulting biclusters were subsequently investigated by Gene Ontology enrichment analysis to evaluate their biological relevance. The distinct theoretical foundation of Bi-Force (bicluster editing) is more powerful than strict biclustering. We thus outperformed existing tools with Bi-Force, at least when following the evaluation protocols from Eren et al. Bi-Force is implemented in Java and integrated into the open source software package of BiCluE. The software as well as all used datasets are publicly available at http://biclue.mpi-inf.mpg.de. PMID:24682815
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
ERIC Educational Resources Information Center
Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.
2016-01-01
Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…
Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R
2018-06-01
A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort. This article is protected by copyright. All rights reserved.
Wyllie, Jessica; Lucas, Benjamin; Carlson, Jamie; Kitchens, Brent; Kozary, Ben; Zaki, Mohamed
2016-01-01
Using a small-scale descriptive network analysis approach, this study highlights the importance of stakeholder networks for identifying valuable stakeholders and the management of existing stakeholders in the context of mental health not-for-profit services. We extract network data from the social media brand pages of three health service organizations from the U.S., U.K., and Australia, to visually map networks of 579 social media brand pages (represented by nodes), connected by 5,600 edges. This network data is analyzed using a collection of popular graph analysis techniques to assess the differences in the way each of the service organizations manage stakeholder networks. We also compare node meta-information against basic topology measures to emphasize the importance of effectively managing relationships with stakeholders who have large external audiences. Implications and future research directions are also discussed.
Carlson, Jamie; Kitchens, Brent; Kozary, Ben; Zaki, Mohamed
2016-01-01
Using a small-scale descriptive network analysis approach, this study highlights the importance of stakeholder networks for identifying valuable stakeholders and the management of existing stakeholders in the context of mental health not-for-profit services. We extract network data from the social media brand pages of three health service organizations from the U.S., U.K., and Australia, to visually map networks of 579 social media brand pages (represented by nodes), connected by 5,600 edges. This network data is analyzed using a collection of popular graph analysis techniques to assess the differences in the way each of the service organizations manage stakeholder networks. We also compare node meta-information against basic topology measures to emphasize the importance of effectively managing relationships with stakeholders who have large external audiences. Implications and future research directions are also discussed. PMID:27711236
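A small illustrative sketch of the kind of graph analysis described above: build a network of brand pages, compute basic topology measures, and compare them against node metadata such as external audience size. The graph, node names, and audience values are synthetic assumptions, not the study's data.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([("org_A", "page1"), ("org_A", "page2"), ("page1", "page2"),
                  ("org_B", "page2"), ("org_B", "page3"), ("page3", "page4")])
audience = {"org_A": 12000, "org_B": 8000, "page1": 300, "page2": 45000,
            "page3": 900, "page4": 150}                     # assumed external audience sizes

degree = dict(g.degree())
betweenness = nx.betweenness_centrality(g)
for node in g.nodes:
    # compare topology measures with node meta-information
    print(f"{node:7s} degree={degree[node]} betweenness={betweenness[node]:.2f} audience={audience[node]}")
```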
NASA Astrophysics Data System (ADS)
Romo, Cynthia Paulinne
High-speed digital video images of encased and uncased large-scale explosions of Ammonium Nitrate Fuel Oil (ANFO) and Composition C-4 (C-4) at different masses were analyzed using the background-oriented schlieren (BOS) visualization technique. The encased explosions of ANFO and C-4 took place in the form of car bombs and pipe bombs, respectively. The data obtained from the video footage were used to produce shock wave radius vs. time profiles, as well as Mach number vs. shock wave position profiles. The experimentally measured shock wave data for each explosive material were scaled using Sachs' scaling laws to a 1 kilogram charge at normal temperature and pressure. The results for C-4 were compared to the literature, while the results for scaled ANFO were compared to each other and to the results obtained during the uncased detonations. The comparison between the scaled profiles gathered from the encased and uncased detonations allowed identification of the relative amount of energy lost due to the fragmentation of the case. The C-4 profiles were also compared to those obtained from computational simulations performed via CTH. The C-4 results showed agreement between the data reported in the literature and those obtained using the BOS technique, as well as a good overall agreement with the profiles obtained computationally.
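A hedged sketch of the Sachs scaling step mentioned above, as I understand the standard form of the law: measured shock radius and arrival time are nondimensionalized with ambient pressure, sound speed and charge energy, then re-dimensionalized for a 1 kg reference charge at normal temperature and pressure. The charge mass, specific energy and ambient values below are assumptions for illustration only.

```python
import numpy as np

def sachs_scale(r_m, t_s, e_j, p_amb, a_amb, e_ref, p_ref, a_ref):
    """Scale (radius, time) from a charge of energy e_j to a reference charge e_ref,
    assuming Sachs scaled distance R*(p/E)**(1/3) and scaled time t*a*(p/E)**(1/3)."""
    length_ratio = (e_ref / p_ref) ** (1 / 3) * (p_amb / e_j) ** (1 / 3)
    r_scaled = r_m * length_ratio
    t_scaled = t_s * a_amb * (p_amb / e_j) ** (1 / 3) / (a_ref * (p_ref / e_ref) ** (1 / 3))
    return r_scaled, t_scaled

e_anfo = 450.0 * 3.7e6            # assumed: 450 kg ANFO at ~3.7 MJ/kg
e_ref = 1.0 * 3.7e6               # equivalent 1 kg reference charge
r, t = sachs_scale(np.array([20.0]), np.array([0.02]), e_anfo,
                   101325.0, 340.0, e_ref, 101325.0, 340.0)
print("scaled radius (m):", r, "scaled time (s):", t)
```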
A priori analysis of differential diffusion for model development for scale-resolving simulations
NASA Astrophysics Data System (ADS)
Hunger, Franziska; Dietzsch, Felix; Gauding, Michael; Hasse, Christian
2018-01-01
The present study analyzes differential diffusion and the mechanisms responsible for it with regard to the turbulent/nonturbulent interface (TNTI) with special focus on model development for scale-resolving simulations. In order to analyze differences between resolved and subfilter phenomena, direct numerical simulation (DNS) data are compared with explicitly filtered data. The DNS database stems from a temporally evolving turbulent plane jet transporting two passive scalars with Schmidt numbers of unity and 0.25 presented by Hunger et al. [F. Hunger et al., J. Fluid Mech. 802, R5 (2016), 10.1017/jfm.2016.471]. The objective of this research is twofold: (i) to compare the position of the turbulent-nonturbulent interface between the original DNS data and the filtered data and (ii) to analyze differential diffusion and the impact of the TNTI with regard to scale resolution in the filtered DNS data. For the latter, differential diffusion quantities are studied, clearly showing the decrease of differential diffusion at the resolved scales with increasing filter width. A transport equation for the scalar differences is evaluated. Finally, the existence of large scalar gradients, gradient alignment, and the diffusive fluxes being the physical mechanisms responsible for the separation of the two scalars are compared between the resolved and subfilter scales.
Global maps of the magnetic thickness and magnetization of the Earth's lithosphere
NASA Astrophysics Data System (ADS)
Vervelidou, Foteini; Thébault, Erwan
2015-10-01
We have constructed global maps of the large-scale magnetic thickness and magnetization of Earth's lithosphere. Deriving such large-scale maps based on lithospheric magnetic field measurements faces the challenge of the masking effect of the core field. In this study, the maps were obtained through analyses in the spectral domain by means of a new regional spatial power spectrum based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism. A series of regional spectral analyses were conducted covering the entire Earth. The R-SCHA surface power spectrum for each region was estimated using the NGDC-720 spherical harmonic (SH) model of the lithospheric magnetic field, which is based on satellite, aeromagnetic, and marine measurements. These observational regional spectra were fitted to a recently proposed statistical expression of the power spectrum of Earth's lithospheric magnetic field, whose free parameters include the thickness and magnetization of the magnetic sources. The resulting global magnetic thickness map is compared to other crustal and magnetic thickness maps based upon different geophysical data. We conclude that the large-scale magnetic thickness of the lithosphere is on average confined to a layer that does not exceed the Moho.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k -mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
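A minimal sketch of the classical Good-Toulmin extrapolation underlying the method described above, assuming only a counts-of-counts histogram n_j (the number of species seen exactly j times). The alternating series is reliable for modest extrapolation (t <= 1); the paper's rational-function approximation, which stabilizes it for larger t, is not reproduced here, and the histogram values are made up.

```python
import numpy as np

def good_toulmin_new_species(counts_hist, t):
    """Expected number of *new* species when sampling effort grows by a factor (1 + t)."""
    j = np.arange(1, len(counts_hist) + 1)
    return np.sum(((-1.0) ** (j + 1)) * (t ** j) * counts_hist)

# n_j for j = 1..5: 500 singletons, 200 doubletons, ...
n_j = np.array([500, 200, 90, 40, 20], dtype=float)
print("expected new species at t = 0.5:", good_toulmin_new_species(n_j, 0.5))
```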
Applications of species accumulation curves in large-scale biological data analysis
Deng, Chao; Daley, Timothy; Smith, Andrew D
2016-01-01
The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large-scale circulation and tropical modes. The model is based on WRF, configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel configuration alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
Thomson, William Murray; Malden, Penelope Elizabeth
2011-09-01
To examine the properties, validity and responsiveness of the Family Impact Scale in a consecutive clinical sample of patients undergoing dental treatment under general anaesthesia. A consecutive clinical sample of parents/caregivers of children receiving dental treatment under general anaesthesia provided data using the Family Impact Scale (FIS) component of the COHQOL(©) Questionnaire. The first questionnaire was completed before treatment, the follow-up questionnaire 1-4 weeks afterward. Treatment-associated changes in the FIS and its components were determined by comparing baseline and follow-up data. Baseline and follow-up data were obtained for 202 and 130 participants, respectively (64.4% follow-up). All FIS items showed large relative decreases in prevalence, the greatest seen in those relating to having sleep disrupted, blaming others, being upset, the child requiring more attention, financial difficulties and having to take time off work. Factor analysis largely confirmed the underlying factor structure, with three sub-scales (parental/family, parental emotions and family conflict) identified. The parental/family and parental emotions sub-scales showed the greatest treatment-associated improvement, with large effect sizes. There was a moderate improvement in scores on the family conflict sub-scale. The overall FIS showed a large improvement. Treating children with severe caries under general anaesthesia results in OHRQoL improvements for the family. Severe dental caries is not merely a restorative and preventive challenge for those who treat children; it has far-reaching effects on those who share the household and care for the affected child.
Kidane, A.; Hepelwa, A.; Tingum, E.; Hu, T.W.
2016-01-01
In this study an attempt is made to compare the efficiency of tobacco leaf production with that of three other crops commonly grown by Tanzanian small-scale farmers: maize, groundnut and rice. The paper compares the prevalence of tobacco use in Africa with that of the developed world; while there has been a decline in the latter, there appears to be an increase in the former. The economic benefits and costs of tobacco production and consumption in Tanzania are also compared. Using nationally representative large-scale data, we observed that the modern agricultural inputs allotted to tobacco were much greater than those allotted to maize, groundnut and rice. Using a frontier production approach, the study shows that the efficiencies of tobacco, maize, groundnut and rice were 75.3%, 68.5%, 64.5% and 46.5%, respectively. Despite the massive agricultural inputs allotted to it, tobacco production is still only 75.3% efficient: tobacco farmers should have been able to produce the same amount by utilizing only 75.3% of the realized inputs. The relatively high efficiency of tobacco can only be explained by the large-scale allocation of modern agricultural inputs such as fertilizer, better seeds, credit facilities and easy access to markets. The situation is likely to be reversed if more inputs were allocated to basic food crops such as maize, rice and groundnut. Tanzania's policy of food security and poverty alleviation can only be achieved by allocating more modern inputs to basic necessities such as maize and rice. PMID:28124032
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of small-scale modes and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in these parameters is recovered if we can employ a prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction if the effects up to larger wave numbers in the nonlinear regime can be included.
State-resolved Thermal/Hyperthermal Dynamics of Atmospheric Species
2015-06-23
Research areas: 1) diode laser and LIF studies of hyperthermal CO2 and NO collisions at gas-room temperature ionic liquid (RTIL) interfaces; 2) large-scale trajectory simulations for theoretical analysis of gas-liquid scattering studies; 3) LIF data for state-resolved scattering of hyperthermal NO at...
Non-linear scale interactions in a forced turbulent boundary layer
NASA Astrophysics Data System (ADS)
Duvvuri, Subrahmanyam; McKeon, Beverley
2015-11-01
A strong phase-organizing influence exerted by a single synthetic large-scale spatio-temporal mode on directly-coupled (through triadic interactions) small scales in a turbulent boundary layer forced by a spatially-impulsive dynamic wall-roughness patch was previously demonstrated by the authors (J. Fluid Mech. 2015, vol. 767, R4). The experimental set-up was later enhanced to allow for simultaneous forcing of multiple scales in the flow. Results and analysis are presented from a new set of novel experiments where two distinct large scales are forced in the flow by a dynamic wall-roughness patch. The internal non-linear forcing of two other scales with triadic consistency to the artificially forced large scales, corresponding to sum and difference in wavenumbers, is dominated by the latter. This allows for a forcing-response (input-output) type analysis of the two triadic scales, and naturally lends itself to a resolvent operator based model (e.g. McKeon & Sharma, J. Fluid Mech. 2010, vol. 658, pp. 336-382) of the governing Navier-Stokes equations. The support of AFOSR (grant #FA 9550-12-1-0469, program manager D. Smith) is gratefully acknowledged.
Large-scale influences in near-wall turbulence.
Hutchins, Nicholas; Marusic, Ivan
2007-03-15
Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
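A sketch, with synthetic data, of the decomposition behind the modulation result above: split a velocity signal into large- and small-scale parts with a low-pass filter, take the envelope of the small scales via the Hilbert transform, and correlate its low-passed envelope with the large-scale signal. The sampling rate, cutoff frequency and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n = 10_000.0, 200_000
rng = np.random.default_rng(4)
u = rng.standard_normal(n)                                # stand-in for a hot-wire velocity signal

b, a = butter(4, 50.0 / (fs / 2.0), btype="low")          # large-scale cutoff ~50 Hz (assumed)
u_large = filtfilt(b, a, u)
u_small = u - u_large

envelope = np.abs(hilbert(u_small))                       # small-scale amplitude envelope
env_large = filtfilt(b, a, envelope)                      # keep only its large-scale content

R = np.corrcoef(u_large, env_large)[0, 1]                 # amplitude-modulation coefficient
print("modulation coefficient R =", round(R, 3))
```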
Scaling properties of the Arctic sea ice Deformation from Buoy Dispersion Analysis
NASA Astrophysics Data System (ADS)
Weiss, J.; Rampal, P.; Marsan, D.; Lindsay, R.; Stern, H.
2007-12-01
A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over time scales from 3 hours to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate, the Arctic sea ice cover, stressed by heterogeneous forcing terms such as winds and ocean currents. It shows that the sea ice deformation rate depends on the scale of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scale, i.e. it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by multi-scale fracturing/faulting processes.
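A hedged sketch of the scaling-law estimation step: given deformation rates sampled at several spatial scales, fit the power law eps(L) ~ L**(-beta) by least squares in log-log space. The (L, eps) pairs below are synthetic placeholders, not IABP buoy data.

```python
import numpy as np

L = np.array([0.3, 1, 3, 10, 30, 100, 300])          # spatial scale (km)
eps = 0.05 * L ** -0.2 * np.exp(0.05 * np.random.default_rng(5).standard_normal(L.size))

# slope of log(eps) vs log(L) gives the spatial scaling exponent -beta
beta, log_c = np.polyfit(np.log(L), np.log(eps), 1)
print(f"spatial scaling exponent beta = {-beta:.2f}")
```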
NASA Astrophysics Data System (ADS)
Sung, Hae-Jin; Go, Byeong-Soo; Jiang, Zhenan; Park, Minwon; Yu, In-Keun
2016-11-01
The development of an effective high-temperature superconducting (HTS) generator is currently a research focus; however, the reduction of heat loss in a large-scale HTS generator is a challenge. This study deals with a heat loss analysis-based design of a 12 MW wind power generator module having an HTS flux pump exciter. The generator module consists of an HTS rotor of the generator and an HTS flux pump exciter. The specifications of the module were described, and the detailed configuration of the module was illustrated. For the heat loss analysis of the module, the excitation loss of the flux pump exciter, the eddy current loss of all of the structures in the module, the radiation loss, and the conduction loss of an HTS coil supporter were assessed using a 3D finite element method program. In the case of the conduction loss, different types of supporters were compared to identify the supporter with the lowest conduction loss in the module. The heat loss analysis results were reflected in the design of the generator module and discussed in detail. The results will be applied to the design of large-scale superconducting generators for wind turbines, including a cooling system.
DeBoy, Robert T; Mongodin, Emmanuel F; Emerson, Joanne B; Nelson, Karen E
2006-04-01
In the present study, the chromosomes of two members of the Thermotogales were compared. A whole-genome alignment of Thermotoga maritima MSB8 and Thermotoga neapolitana NS-E has revealed numerous large-scale DNA rearrangements, most of which are associated with CRISPR DNA repeats and/or tRNA genes. These DNA rearrangements do not include the putative origin of DNA replication but move within the same replichore, i.e., the same replicating half of the chromosome (delimited by the replication origin and terminus). Based on cumulative GC skew analysis, both the T. maritima and T. neapolitana lineages contain one or two major inverted DNA segments. Also, based on PCR amplification and sequence analysis of the DNA joints that are associated with the major rearrangements, the overall chromosome architecture was found to be conserved at most DNA joints for other strains of T. neapolitana. Taken together, the results from this analysis suggest that the observed chromosomal rearrangements in the Thermotogales likely occurred by successive inversions after their divergence from a common ancestor and before strain diversification. Finally, sequence analysis shows that size polymorphisms in the DNA joints associated with CRISPRs can be explained by expansion and possibly contraction of the DNA repeat and spacer unit, providing a tool for discerning the relatedness of strains from different geographic locations.
Pelvic-floor strength in women with incontinence as assessed by the brink scale.
FitzGerald, Mary P; Burgio, Kathryn L; Borello-France, Diane F; Menefee, Shawn A; Schaffer, Joseph; Kraus, Stephen; Mallett, Veronica T; Xu, Yan
2007-10-01
The purpose of this study was to describe how clinical pelvic-floor muscle (PFM) strength (force-generating capacity) is related to patient characteristics, lower urinary tract symptoms, and fecal incontinence symptoms. Data were obtained from 643 women who were participating in a randomized surgical trial for treatment of stress urinary incontinence. Patient demographic variables, baseline urinary and fecal incontinence symptom questionnaires, urodynamic data and urinary diary data, pad test results, and standardized assessment of pelvic organ support were compared with PFM strength as described by the Brink scoring system. Bivariate analysis of factors associated with the Brink scale score was done using analysis of variance and linear regression. Multivariate analysis included patient variables that were significant on bivariate analysis. The mean Brink scale score was 9 (SD=2) and did not vary widely in this large, but highly select, patient sample. We found a weak, but statistically strong, relationship between age and Brink score. Brink scores were not related to diary and pad test measures of incontinence severity. Overall, PFM strength was good in this sample of women with stress incontinence. Scores tended to be similar, and it is possible that the Brink scale does not reflect real clinical differences in PFM strength.
NASA Astrophysics Data System (ADS)
Zaman, Shakil Bin; Barlat, Frédéric; Kim, Jin Hwan
2018-05-01
Large-scale advanced high strength steel (AHSS) sheet specimens were deformed in uniaxial tension using a novel grip system mounted on an MTS universal tension machine. After pre-straining, the specimens were used as pre-strained material to examine the anisotropic response in biaxial tension tests with various load ratios, and in orthogonal tension tests at 45° and 90° from the pre-strain axis. The flow curve and the instantaneous r-value of the pre-strained steel in each of the aforementioned uniaxial testing conditions were also measured and compared with those of the undeformed steel. Furthermore, an exhaustive analysis of the yield surface was conducted, and the results prior to and after pre-strain were represented and compared. The homogeneous anisotropic hardening (HAH) model [1] was employed to predict the behavior of the pre-strained material. It was found that the HAH-predicted flow curves after a non-linear strain path change and the yield loci after uniaxial pre-strain were in good agreement with the experiments, while the r-value evolution after the strain path change was qualitatively well predicted.
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
Scaling of mode shapes from operational modal analysis using harmonic forces
NASA Astrophysics Data System (ADS)
Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.
2017-10-01
This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, also structures with closely coupled modes, and the method can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be relatively easily accomplished by using general-purpose actuators, also for force levels necessary for driving large structures such as bridges and highrise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to typical operational modal analysis test time. The proposed method is thus easy to apply and inexpensive relative to some other methods for scaling mode shapes that are available in literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental activity on a real structure was carried out and the results reported demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for the mode shape scaling, we propose to call the method OMAH.
Hessling, Bernd; Büttner, Knut; Hecker, Michael; Becher, Dörte
2013-01-01
Quantitative LC-MALDI is an underrepresented method, especially in large-scale experiments. The additional fractionation step that is needed for most MALDI-TOF-TOF instruments, the comparatively long analysis time, and the very limited number of established software tools for the data analysis render LC-MALDI a niche application for large quantitative analyses beside the widespread LC–electrospray ionization workflows. Here, we used LC-MALDI in a relative quantification analysis of Staphylococcus aureus for the first time on a proteome-wide scale. Samples were analyzed in parallel with an LTQ-Orbitrap, which allowed cross-validation with a well-established workflow. With nearly 850 proteins identified in the cytosolic fraction and quantitative data for more than 550 proteins obtained with the MASCOT Distiller software, we were able to prove that LC-MALDI is able to process highly complex samples. The good correlation of quantities determined via this method and the LTQ-Orbitrap workflow confirmed the high reliability of our LC-MALDI approach for global quantification analysis. Because the existing literature reports differences for MALDI and electrospray ionization preferences and the respective experimental work was limited by technical or methodological constraints, we systematically compared biochemical attributes of peptides identified with either instrument. This genome-wide, comprehensive study revealed biases toward certain peptide properties for both MALDI-TOF-TOF- and LTQ-Orbitrap-based approaches. These biases are based on almost 13,000 peptides and result in a general complementarity of the two approaches that should be exploited in future experiments. PMID:23788530
Hessling, Bernd; Büttner, Knut; Hecker, Michael; Becher, Dörte
2013-10-01
Quantitative LC-MALDI is an underrepresented method, especially in large-scale experiments. The additional fractionation step that is needed for most MALDI-TOF-TOF instruments, the comparatively long analysis time, and the very limited number of established software tools for the data analysis render LC-MALDI a niche application for large quantitative analyses beside the widespread LC-electrospray ionization workflows. Here, we used LC-MALDI in a relative quantification analysis of Staphylococcus aureus for the first time on a proteome-wide scale. Samples were analyzed in parallel with an LTQ-Orbitrap, which allowed cross-validation with a well-established workflow. With nearly 850 proteins identified in the cytosolic fraction and quantitative data for more than 550 proteins obtained with the MASCOT Distiller software, we were able to prove that LC-MALDI is able to process highly complex samples. The good correlation of quantities determined via this method and the LTQ-Orbitrap workflow confirmed the high reliability of our LC-MALDI approach for global quantification analysis. Because the existing literature reports differences for MALDI and electrospray ionization preferences and the respective experimental work was limited by technical or methodological constraints, we systematically compared biochemical attributes of peptides identified with either instrument. This genome-wide, comprehensive study revealed biases toward certain peptide properties for both MALDI-TOF-TOF- and LTQ-Orbitrap-based approaches. These biases are based on almost 13,000 peptides and result in a general complementarity of the two approaches that should be exploited in future experiments.
Michael D. Ulyshen; James L. Hanula
2009-01-01
Large-scale experimental manipulations of deadwood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...
Michael Ulyshen; James Hanula
2009-01-01
Large-scale experimental manipulations of deadwood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hueso, R.; Sánchez-Lavega, A.; Gómez-Forrellad, J. M.; Rojas, J. F.; Pérez-Hoyos, S.; Sanz-Requena, J. F.; Peralta, J.; Ordonez-Etxeberria, I.; Chen-Chen, H.; Mendikoa, I.; Peach, D.; Go, C.; Wesley, A.; Miles, P.; Olivetti, T.
2017-09-01
We present an analysis of Jupiter's atmospheric activity over Juno's first year around the planet based on ground-based observations. We present the variability of the zonal winds associated with large outbreaks of convective activity in different belts on the planet and a study of short-scale atmospheric waves at low latitudes, and we examine polar views of the planet that can be compared with JunoCam observations.
Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments
2017-06-13
reference frames enable a system designer to describe the position of any sensor or platform at any point in time. This section introduces the... analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In... structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values (f). A systems designer must
Attributes and Behaviors of Performance-Centered Systems.
ERIC Educational Resources Information Center
Gery, Gloria
1995-01-01
Examines attributes, characteristics, and behaviors of performance-centered software packages that are emerging in the consumer software marketplace and compares them with large-scale systems software being designed by internal information systems staffs and vendors of large-scale software designed for financial, manufacturing, processing, and…
NASA Astrophysics Data System (ADS)
Volwerk, Martin; Goetz, Charlotte; Richter, Ingo; Delva, Magda; Ostaszewski, Katharina; Schwingenschuh, Konrad; Glassmeier, Karl-Heinz
2018-06-01
Context. The Rosetta Plasma Consortium (RPC) magnetometer (MAG) data during the tail excursion in March-April 2016 are used to investigate the magnetic structure of, and activity in, the tail region of the weakly outgassing comet 67P/Churyumov-Gerasimenko (67P). Aims: The goal of this study is to compare the large-scale (near-)tail structure with that observed by earlier missions to strongly outgassing comets, and the small-scale turbulent energy cascade (un)related to the singing comet phenomenon. Methods: The usual methods of space plasma physics are used to analyse the magnetometer data, such as minimum variance analysis, spectral analysis, and power law fitting. The cone angle and clock angle of the magnetic field are also calculated to interpret the data. Results: It is found that comet 67P does not have a classical draped magnetic field and no bi-lobal tail structure at this late stage of the mission, when the comet is already at 2.7 AU from the Sun. The main magnetic field direction seems to be oriented more across the tail direction, which may indicate an asymmetric pick-up cloud. During periods of singing comet activity, the propagation direction of the waves is at large angles with respect to the magnetic field and to the radial direction towards the comet. The turbulent cascade of magnetic energy from large to small scales differs between periods with and without singing.
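A hedged sketch of two of the quantities mentioned above, assuming a comet-centred frame with x pointing sunward: the cone angle is taken as the angle between B and the x axis, and the clock angle as the orientation of B in the y-z plane. The field samples are synthetic and the axis convention is an assumption, not necessarily the one used with RPC-MAG.

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((1000, 3))                     # Bx, By, Bz samples (nT), synthetic

Bmag = np.linalg.norm(B, axis=1)
cone = np.degrees(np.arccos(B[:, 0] / Bmag))           # angle from the x (sunward) axis
clock = np.degrees(np.arctan2(B[:, 1], B[:, 2]))       # orientation in the y-z plane

print("median cone angle:", np.median(cone), "deg")
print("median clock angle:", np.median(clock), "deg")
```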
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiments we train our data model on SentiWordNet, a polarity-assigned WordNet sample by Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
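A minimal sketch of the lexicon-based scoring step described above, shown here in PySpark for brevity even though the study's implementation is in Java; the tiny polarity dictionary and the input path are placeholders standing in for SentiWordNet and the live Twitter feed.

```python
# Hedged sketch of lexicon-based sentiment scoring distributed with Spark.
# The study uses Java on the Spark/Hadoop stack; PySpark is shown here only for brevity,
# and the tiny polarity lexicon stands in for SentiWordNet scores.
from pyspark import SparkContext

sc = SparkContext(appName="tweet-sentiment-sketch")

lexicon = {"good": 0.6, "great": 0.8, "bad": -0.7, "awful": -0.9}  # placeholder polarities
broadcast_lex = sc.broadcast(lexicon)

def score(tweet):
    words = tweet.lower().split()
    values = [broadcast_lex.value.get(w, 0.0) for w in words]
    return sum(values) / len(words) if words else 0.0

tweets = sc.textFile("hdfs:///tweets/stream/*.txt")  # hypothetical input path
scored = tweets.map(lambda t: (t, score(t)))
print(scored.take(5))
sc.stop()
```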
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Large-scale data analysis of power grid resilience across multiple US service regions
NASA Astrophysics Data System (ADS)
Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert
2016-05-01
Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
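The headline statistic above (the top 20% of failures accounting for 84% of interrupted services) is essentially a cumulative-share calculation over failures ranked by customers affected. The sketch below reproduces that type of calculation on synthetic, heavy-tailed data; it is illustrative only and does not use the utility data from the study.

```python
# Hedged sketch: rank failures by customers affected and compute the cumulative share
# of customer interruptions attributable to the top fraction of failures.
# The synthetic data below is illustrative only, not the utility data from the study.
import numpy as np

rng = np.random.default_rng(0)
customers_per_failure = rng.pareto(a=1.5, size=10_000) * 10  # heavy-tailed failure sizes

sizes = np.sort(customers_per_failure)[::-1]        # largest failures first
cum_share = np.cumsum(sizes) / sizes.sum()          # cumulative share of interruptions

top20 = int(0.2 * len(sizes))
print(f"top 20% of failures account for {cum_share[top20 - 1]:.0%} of interruptions")
```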
Groen, Yvonne; Fuermaier, Anselm B M; Tucha, Lara; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Oliver
2018-03-14
This study describes the development and utility of a new self-report measure of attentional capacities of adults with Attention Deficit Hyperactivity Disorder (ADHD): the Everyday Life Attention Scale (ELAS). Different from previous attention scales, attentional capacities are rated for nine everyday situations. Study 1 investigated the factor structure, validity, and reliability of the ELAS in 1206 healthy participants. Confirmatory factor analysis supported a situation-specific approach which categorizes everyday attention into nine situation scales: Reading, Movie, Activity, Lecture, Conversation, Assignment, Cooking, Cleaning up, and Driving. Each scale was composed of ratings for sustained, focused, selective, and divided attention as well as motivation, and had good internal consistency. Most scales showed weak correlations with ADHD Symptoms, Executive Functioning, and Memory Efficacy. Study 2 further investigated the sensitivity of the ELAS in 80 adults with ADHD compared to matched healthy controls and a mixed clinical group of 56 patients diagnosed with other psychiatric disorders. Compared to healthy controls, patients with ADHD reported reduced attentional capacities with large effect sizes on all situation scales and had a substantially higher number of situations with impaired attention scores. The ELAS may become useful in the clinical evaluation of ADHD and related psychiatric disorders in adults.
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make a scientific inference, statistical analysis of large simulation datasets, e.g., finding halos, obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are prohibitively large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques on GPU boosting, sampling, and parallel I/O, to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ∼10^4-10^5, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ∼10^12 particles while using less than an hour of running time on a single GPU Nvidia GTX 1080.
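The paper's GPU-boosted, sampled sketches are considerably more involved, but the memory-bounded idea can be conveyed with a generic Count-Min sketch that estimates how many particles fall into each grid cell and keeps a bounded candidate list of the densest cells. Everything below (cell IDs, sketch dimensions, candidate limits) is an illustrative assumption, not the authors' algorithm.

```python
# Hedged sketch: a generic Count-Min sketch for finding heavily populated grid cells
# (proxy halo centres) in a particle stream. The real pipeline uses GPU-accelerated,
# sampled sketches (Liu et al., 2015); this stand-in only conveys the memory-bounded idea.
import heapq
import numpy as np

class CountMin:
    def __init__(self, width=2**16, depth=4, seed=1):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)

    def _cols(self, key):
        return [hash((int(salt), key)) % self.width for salt in self.salts]

    def add(self, key):
        for d, col in enumerate(self._cols(key)):
            self.table[d, col] += 1

    def estimate(self, key):
        return min(self.table[d, col] for d, col in enumerate(self._cols(key)))

cms = CountMin()
candidates = {}                        # cell_id -> latest sketch estimate
for cell_id in np.random.default_rng(0).integers(0, 10**6, size=500_000):
    cid = int(cell_id)
    cms.add(cid)
    candidates[cid] = cms.estimate(cid)
    if len(candidates) > 2000:         # prune periodically to bound memory
        keep = heapq.nlargest(1000, candidates.items(), key=lambda kv: kv[1])
        candidates = dict(keep)

heaviest = heapq.nlargest(5, candidates.items(), key=lambda kv: kv[1])
print("densest candidate cells:", heaviest)
```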
NASA Astrophysics Data System (ADS)
Pey Tee, Oon; Subramaniam, R.
2018-02-01
We report here on a comparative study of middle school students' attitudes towards science involving three countries: England, Singapore and the U.S.A. Complete attitudinal data sets from TIMSS (Trends in International Mathematics and Science Study) 2011 were used, thus giving a very large sample size (N = 20,246), compared to other studies in the journal literature. The Rasch model was used to analyse the data, and the findings have shed some useful light on not only how the Western and Asian students responded on a comparative basis in the various scales related to attitudes but also on the validity, reliability, and unidimensionality of the attitudes instrument used in TIMSS 2011. There may be a need for TIMSS test developers to consider doing away with negatively phrased items in the attitudes instrument and phrasing these positively as the Rasch framework shows that response bias is associated with these statements.
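For readers unfamiliar with the Rasch model mentioned above, the dichotomous form expresses the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty. The sketch below simply evaluates that formula; the ability and difficulty values are illustrative, not TIMSS estimates.

```python
# Hedged sketch of the dichotomous Rasch model used in such analyses:
# P(endorse) = exp(theta - b) / (1 + exp(theta - b)), with person ability theta
# and item difficulty b. The values below are illustrative, not TIMSS estimates.
import math

def rasch_probability(theta, b):
    """Probability that a person of ability theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.0):
    print(theta, round(rasch_probability(theta, b=0.5), 3))
```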
A comparative study of two prediction models for brain tumor progression
NASA Astrophysics Data System (ADS)
Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang
2015-03-01
The MR diffusion tensor imaging (DTI) technique, together with traditional T1- or T2-weighted MRI scans, supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Because significant correlations exist among these images, we assume low-dimensional geometric data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. Incremental manifold learning is a variant of manifold learning designed to handle large-scale datasets, in which a representative subset of the original data is sampled first to construct a manifold skeleton and the remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performance in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight co-adaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models to multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were calculated as averages over the four patients. Experimental results show that both the manifold learning and deep neural network models produced better results compared to using raw data and principal component analysis (PCA), and the deep learning model is a better method than manifold learning on this data set. The averaged sensitivity and specificity from deep learning are comparable with those from the manifold learning approach, while its precision is considerably higher. This means that the predicted abnormal points from deep learning are more likely to correspond to the actual progression region.
Minor, K S; Willits, J A; Marggraf, M P; Jones, M N; Lysaker, P H
2018-04-25
Conveying information cohesively is an essential element of communication that is disrupted in schizophrenia. These disruptions are typically expressed through disorganized symptoms, which have been linked to neurocognitive, social cognitive, and metacognitive deficits. Automated analysis can objectively assess disorganization within sentences, between sentences, and across paragraphs by comparing explicit communication to a large text corpus. Little work in schizophrenia has tested: (1) links between disorganized symptoms measured via automated analysis and neurocognition, social cognition, or metacognition; and (2) whether automated analysis explains incremental variance in cognitive processes beyond clinician-rated scales. Disorganization was measured in schizophrenia (n = 81) with Coh-Metrix 3.0, an automated program that calculates basic and complex language indices. Trained staff also assessed neurocognition, social cognition, metacognition, and clinician-rated disorganization. Findings showed that all three cognitive processes were significantly associated with at least one automated index of disorganization. When automated analysis was compared with a clinician-rated scale, it accounted for significant variance in neurocognition and metacognition beyond the clinician-rated measure. When combined, these two methods explained 28-31% of the variance in neurocognition, social cognition, and metacognition. This study illustrated how automated analysis can highlight the specific role of disorganization in neurocognition, social cognition, and metacognition. Generally, those with poor cognition also displayed more disorganization in their speech, making it difficult for listeners to process the essential information needed to tie the speaker's ideas together. Our findings showcase how implementing a mixed-methods approach in schizophrenia can explain substantial variance in cognitive processes.
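The incremental-variance comparison described above can be illustrated with nested regressions: fit a model with the clinician-rated scale alone, add the automated index, and report the change in R-squared. The sketch below uses synthetic variables with assumed names; it is not the study's data or exact analysis.

```python
# Hedged sketch: incremental variance explained by an automated disorganization index
# beyond a clinician-rated scale, via nested OLS models (delta R-squared).
# All variables below are synthetic placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 81
clinician_rated = rng.normal(size=n)
cohmetrix_index = 0.5 * clinician_rated + rng.normal(size=n)    # correlated predictor
neurocognition = 0.4 * clinician_rated + 0.3 * cohmetrix_index + rng.normal(size=n)

base = sm.OLS(neurocognition, sm.add_constant(clinician_rated)).fit()
full = sm.OLS(neurocognition,
              sm.add_constant(np.column_stack([clinician_rated, cohmetrix_index]))).fit()
print(f"delta R^2 from the automated index: {full.rsquared - base.rsquared:.3f}")
```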
Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies
NASA Astrophysics Data System (ADS)
Xie, S.; Zhang, Y.
2011-12-01
The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign, conducted during the period 22 April 2011 to 6 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy, so that the final analysis data are dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems of various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
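The core of the constrained variational idea is a minimum-adjustment problem: nudge the sounding-derived state by the smallest possible amount while forcing the column budgets to close. The toy sketch below shows that structure with a single linear constraint standing in for the mass, moisture and static energy budgets; the numbers and the budget operator are placeholders, not the actual analysis.

```python
# Hedged sketch of the variational idea behind the forcing derivation: adjust an observed
# state vector x as little as possible (least squares) while satisfying a linear budget
# constraint a.x = c. The single toy constraint stands in for the column-integrated
# mass/moisture/energy budgets used in the real constrained variational analysis.
import numpy as np
from scipy.optimize import minimize

x_obs = np.array([1.2, 0.8, 1.5, 0.9])   # observed state (placeholder values)
a = np.array([1.0, 1.0, 1.0, 1.0])        # budget operator (placeholder)
c = 4.0                                   # value the budget must close to

res = minimize(lambda x: np.sum((x - x_obs) ** 2),
               x0=x_obs,
               constraints=[{"type": "eq", "fun": lambda x: a @ x - c}],
               method="SLSQP")
print("adjusted state:", res.x, "closed budget:", a @ res.x)
```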
Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0
Huck, Kevin A.; Malony, Allen D.; Shende, Sameer; ...
2008-01-01
The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments present a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
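The kind of analysis PerfExplorer automates (dimension reduction followed by clustering of per-thread profiles) can be sketched with scikit-learn on a synthetic counter matrix, as below. This is an analogy to the workflow only, not PerfExplorer's own Python scripting API.

```python
# Hedged sketch of the kind of analysis PerfExplorer automates: dimension reduction
# followed by clustering of per-thread performance profiles. This uses scikit-learn on
# a synthetic counter matrix and is not PerfExplorer's own scripting interface.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows = threads/processes, columns = performance counters (synthetic)
profiles = np.vstack([rng.normal(loc, 1.0, size=(500, 20)) for loc in (0.0, 3.0, 6.0)])

reduced = PCA(n_components=3).fit_transform(profiles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
print("threads per behavioural cluster:", np.bincount(labels))
```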
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Network analysis of mesoscale optical recordings to assess regional, functional connectivity.
Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H
2015-10-01
With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
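A hedged sketch of the regional network construction described above: threshold a correlation matrix of ROI time series into an adjacency matrix and compute standard graph metrics. The ROI count, threshold and data are illustrative assumptions, not values from the published protocol.

```python
# Hedged sketch: build a regional functional network by thresholding a correlation
# matrix of ROI time series, then compute graph metrics of the kind discussed.
# ROI count, threshold and data are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
timeseries = rng.normal(size=(200, 12))          # 200 frames x 12 ROIs (synthetic)
corr = np.corrcoef(timeseries.T)

adjacency = (np.abs(corr) > 0.3) & ~np.eye(len(corr), dtype=bool)
G = nx.from_numpy_array(adjacency.astype(int))

print("average clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```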
Wang, Jinghong; Lo, Siuming; Wang, Qingsong; Sun, Jinhua; Mu, Honglin
2013-08-01
Crowd density is a key factor that influences the moving characteristics of a large group of people during a large-scale evacuation. In this article, the macro features of crowd flow and subsequent rescue strategies were considered, and a series of characteristic crowd densities that affect large-scale people movement, as well as the maximum bearing density when the crowd is extremely congested, were analyzed. On the basis of characteristic crowd densities, queuing theory was applied to simulate crowd movement. Accordingly, the moving characteristics of the crowd and the effects on rescue strategies of typical crowd density, viewed as representing the crowd's arrival intensity in front of the evacuation passageways, were studied. Furthermore, a "risk axle of crowd density" is proposed to determine the efficiency of rescue strategies in a large-scale evacuation, i.e., whether the rescue strategies are able to effectively maintain or improve evacuation efficiency. Finally, through some rational hypotheses for the value of evacuation risk, a three-dimensional distribution of the evacuation risk is established to illustrate the risk axle of crowd density. This work provides a macro-level, but original, analysis of the risk of large-scale crowd evacuation from the perspective of the efficiency of rescue strategies. © 2012 Society for Risk Analysis.
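One simple way to relate arrival intensity at a passageway to congestion, in the spirit of the queuing-theory treatment above, is the textbook M/M/c queue. The sketch below computes the probability of waiting, mean queue length and mean wait; it is standard queueing theory, not the article's specific formulation, and the rates are placeholders.

```python
# Hedged sketch: a standard M/M/c queue as one simple way to relate crowd arrival
# intensity at an evacuation passageway to expected queueing delay. This is textbook
# queueing theory, not the specific formulation used in the article.
from math import factorial

def mmc_wait(arrival_rate, service_rate, servers):
    """Return (P_wait, mean queue length, mean wait) for an M/M/c queue."""
    a = arrival_rate / service_rate              # offered load
    rho = a / servers                            # utilisation, must be < 1
    erlang_c = (a**servers / factorial(servers)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(servers))
        + a**servers / factorial(servers))
    lq = erlang_c * rho / (1 - rho)              # mean number waiting
    return erlang_c, lq, lq / arrival_rate       # Little's law for the mean wait

# e.g. 1.5 persons/s arriving at a passageway served by 2 exits of 1 person/s each
print(mmc_wait(arrival_rate=1.5, service_rate=1.0, servers=2))
```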
Flood protection diversification to reduce probabilities of extreme losses.
Zhou, Qian; Lambert, James H; Karvetski, Christopher W; Keisler, Jeffrey M; Linkov, Igor
2012-11-01
Recent catastrophic losses because of floods require developing resilient approaches to flood risk protection. This article assesses how diversification of a system of coastal protections might decrease the probabilities of extreme flood losses. The study compares the performance of portfolios each consisting of four types of flood protection assets in a large region of dike rings. A parametric analysis suggests conditions in which diversifications of the types of included flood protection assets decrease extreme flood losses. Increased return periods of extreme losses are associated with portfolios where the asset types have low correlations of economic risk. The effort highlights the importance of understanding correlations across asset types in planning for large-scale flood protection. It allows explicit integration of climate change scenarios in developing flood mitigation strategy. © 2012 Society for Risk Analysis.
Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F
2014-10-07
Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
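A minimal sketch of how a dissociation constant can be estimated from fluorescence anisotropy readings with a single-site binding isotherm; the concentrations, anisotropy values and starting guesses are synthetic, and the multiplexed chip workflow itself is not represented.

```python
# Hedged sketch: estimating a dissociation constant from fluorescence anisotropy
# measurements with a single-site binding isotherm. Concentrations and readings are
# synthetic; the chip assay itself involves far more multiplexing than shown here.
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(protein_conc, r_free, r_bound, kd):
    fraction_bound = protein_conc / (kd + protein_conc)
    return r_free + (r_bound - r_free) * fraction_bound

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # uM, illustrative titration
observed = anisotropy(conc, 0.05, 0.20, kd=0.5) \
    + np.random.default_rng(0).normal(0, 0.003, conc.size)

params, _ = curve_fit(anisotropy, conc, observed, p0=(0.05, 0.2, 1.0))
print("fitted Kd (uM):", params[2])
```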
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar has high resolution, strong anti-interference ability and good hiding ability, making it a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's response power. Secondly, a large-scale LRCS measurement system is designed according to the demonstration and estimation. The system mainly consists of a laser shaping and beam emitting device, a laser echo receiving device and an integrated control device. Finally, using the designed lidar cross section measurement system, the scattering cross section of the target is simulated and tested. The simulation results are basically the same as the test results, confirming the validity of the system.
Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.
Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C
2018-06-01
Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
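A hedged sketch of the mutual-nearest-neighbour detection step: the published method additionally cosine-normalises the expression values and converts MNN pairs into smoothed batch-correction vectors, which is omitted here. Batch sizes, dimensionality and the neighbourhood size k are illustrative assumptions.

```python
# Hedged sketch: finding mutual nearest neighbour (MNN) pairs between two batches.
# The published method also cosine-normalises the data and turns MNN pairs into
# smoothed batch-correction vectors; only the pair detection step is illustrated here.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
batch1 = rng.normal(size=(300, 50))            # cells x genes (synthetic)
batch2 = rng.normal(size=(250, 50)) + 0.5      # shared populations with a batch shift

k = 20
nn12 = NearestNeighbors(n_neighbors=k).fit(batch2).kneighbors(batch1, return_distance=False)
nn21 = NearestNeighbors(n_neighbors=k).fit(batch1).kneighbors(batch2, return_distance=False)

pairs = [(i, j) for i in range(len(batch1)) for j in nn12[i] if i in nn21[j]]
print("mutual nearest neighbour pairs found:", len(pairs))
```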
Wright, Stuart J; Vass, Caroline M; Sim, Gene; Burton, Michael; Fiebig, Denzil G; Payne, Katherine
2018-02-28
Scale heterogeneity, or differences in the error variance of choices, may account for a significant amount of the observed variation in the results of discrete choice experiments (DCEs) when comparing preferences between different groups of respondents. The aim of this study was to identify if, and how, scale heterogeneity has been addressed in healthcare DCEs that compare the preferences of different groups. A systematic review identified all healthcare DCEs published between 1990 and February 2016. The full-text of each DCE was then screened to identify studies that compared preferences using data generated from multiple groups. Data were extracted and tabulated on year of publication, samples compared, tests for scale heterogeneity, and analytical methods to account for scale heterogeneity. Narrative analysis was used to describe if, and how, scale heterogeneity was accounted for when preferences were compared. A total of 626 healthcare DCEs were identified. Of these 199 (32%) aimed to compare the preferences of different groups specified at the design stage, while 79 (13%) compared the preferences of groups identified at the analysis stage. Of the 278 included papers, 49 (18%) discussed potential scale issues, 18 (7%) used a formal method of analysis to account for scale between groups, and 2 (1%) accounted for scale differences between preference groups at the analysis stage. Scale heterogeneity was present in 65% (n = 13) of studies that tested for it. Analytical methods to test for scale heterogeneity included coefficient plots (n = 5, 2%), heteroscedastic conditional logit models (n = 6, 2%), Swait and Louviere tests (n = 4, 1%), generalised multinomial logit models (n = 5, 2%), and scale-adjusted latent class analysis (n = 2, 1%). Scale heterogeneity is a prevalent issue in healthcare DCEs. Despite this, few published DCEs have discussed such issues, and fewer still have used formal methods to identify and account for the impact of scale heterogeneity. The use of formal methods to test for scale heterogeneity should be used, otherwise the results of DCEs potentially risk producing biased and potentially misleading conclusions regarding preferences for aspects of healthcare.
Graham, Jay P; Leibler, Jessica H; Price, Lance B; Otte, Joachim M; Pfeiffer, Dirk U; Tiensin, T; Silbergeld, Ellen K
2008-01-01
Understanding interactions between animals and humans is critical in preventing outbreaks of zoonotic disease. This is particularly important for avian influenza. Food animal production has been transformed since the 1918 influenza pandemic. Poultry and swine production have changed from small-scale methods to industrial-scale operations. There is substantial evidence of pathogen movement between and among these industrial facilities, release to the external environment, and exposure to farm workers, which challenges the assumption that modern poultry production is more biosecure and biocontained as compared with backyard or small holder operations in preventing introduction and release of pathogens. An analysis of data from the Thai government investigation in 2004 indicates that the odds of H5N1 outbreaks and infections were significantly higher in large-scale commercial poultry operations as compared with backyard flocks. These data suggest that successful strategies to prevent or mitigate the emergence of pandemic avian influenza must consider risk factors specific to modern industrialized food animal production.
Scaling properties of sea ice deformation from buoy dispersion analysis
NASA Astrophysics Data System (ADS)
Rampal, P.; Weiss, J.; Marsan, D.; Lindsay, R.; Stern, H.
2008-03-01
A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over timescales from 3 h to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms like winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e., it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by a multiscale fracturing/faulting processes.
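The scaling analysis described above amounts to fitting power laws to deformation-rate estimates as a function of buoy-pair separation and time scale. The sketch below fits a spatial exponent on synthetic data; the separations, rates and noise level are placeholders, not IABP observations.

```python
# Hedged sketch: estimate a spatial scaling exponent by fitting a power law,
# rate ~ L**(-beta), to deformation-rate proxies binned by buoy-pair separation L.
# The separations and rates below are synthetic, not IABP data.
import numpy as np

rng = np.random.default_rng(0)
separation_km = np.logspace(0.5, 2.5, 200)                  # ~3 km to ~300 km
true_beta = 0.4
deform_rate = separation_km ** (-true_beta) * np.exp(rng.normal(0, 0.2, separation_km.size))

slope, intercept = np.polyfit(np.log10(separation_km), np.log10(deform_rate), 1)
print(f"estimated scaling exponent beta = {-slope:.2f}")
```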
ERIC Educational Resources Information Center
Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar
2017-01-01
Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…
NASA Astrophysics Data System (ADS)
Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.
2014-09-01
Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.
Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena
2015-01-01
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape the large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus
2016-01-01
The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective times series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach “Task-related Edge Density” (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function. PMID:27341204
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are needed. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land surface dynamic change, and solid Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and the fundamental scientific framework of Moon-based Earth observation.
NASA Technical Reports Server (NTRS)
Africano, J.; De Vaucouleurs, G.; Evans, D. S.; Finkel, B. E.; Nather, R. E.; Palm, C.; Silverberg, E.; Wiant, J.; Hubbard, W. B.; Jokipii, J. R.
1977-01-01
An analysis of observations of the occultation of Epsilon Gem by Mars on April 8, 1976, is presented. The data were obtained by three neighboring telescopes at McDonald Observatory. Intensity fluctuations on time scales of the order of 100 ms were observed simultaneously at the three telescopes. As the observations compare well with predictions of turbulent scintillation theory, it is concluded that such fluctuations were probably largely the effect of stellar scintillations in the Martian atmosphere. The stellar diameter is included as a parameter in the theory but in a way which differs from previously published interpretations of occultations of extended sources by planetary atmospheres. Scintillations govern the experimental uncertainty in the deduction of the scale height of the high Martian atmosphere. A density scale height of 9.9 ± 2.5 km is obtained at an altitude of 74 ± 8 km above the mean surface. For CO2 gas, this result corresponds to a temperature of 190 ± 50 K.
Utilizing Wavelet Analysis to assess hydrograph change in northwestern North America
NASA Astrophysics Data System (ADS)
Tang, W.; Carey, S. K.
2017-12-01
Historical streamflow data in the mountainous regions of northwestern North America suggest that changes in flows are driven by warming temperatures, declining snowpack and glacier extent, and large-scale teleconnections. However, few sites exist that have robust long-term records for statistical analysis, and previous research has focused on high- and low-flow indices along with trend analysis using the Mann-Kendall test and similar approaches. Furthermore, there has been less emphasis on ascertaining the drivers of changes in the shape of the streamflow hydrograph compared with traditional flow metrics. In this work, we utilize wavelet analysis to evaluate changes in hydrograph characteristics for snowmelt-driven rivers in northwestern North America across a range of scales. Results suggest that wavelets can be used to detect a lengthening and advancement of the freshet with a corresponding decline in peak flows. Furthermore, the gradual transition of flows from nival to pluvial regimes in more southerly catchments is evident in the wavelet spectral power through time. A challenge for this method of change detection is evaluating the statistical significance of changes in wavelet spectra as they relate to hydrograph form, and ongoing work seeks to link these patterns to driving weather and climate along with larger-scale teleconnections.
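A minimal sketch of the wavelet step, assuming the PyWavelets library and a synthetic daily hydrograph with an annual freshet pulse; the wavelet choice and scale range are illustrative, not the settings used in this work.

```python
# Hedged sketch: continuous wavelet transform of a synthetic daily hydrograph with
# PyWavelets, of the kind used to track changes in freshet timing and duration.
# The series, wavelet choice and scales are illustrative assumptions.
import numpy as np
import pywt

days = np.arange(3650)                                            # ten synthetic years
freshet = np.exp(-((days % 365) - 150) ** 2 / (2 * 20.0 ** 2))    # spring melt pulse
flow = 5.0 + 50.0 * freshet + np.random.default_rng(0).normal(0, 1.0, days.size)

scales = np.arange(4, 256)
coeffs, freqs = pywt.cwt(flow, scales, "morl", sampling_period=1.0)
power = np.abs(coeffs) ** 2
print("wavelet power matrix shape (scales, days):", power.shape)
```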
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands are becoming possible. The observational analysis of large-scale structure guided by large-volume numerical simulations is beginning to offer us complementary information and crosschecks of cosmological parameters estimated from the anisotropies in Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution and even galaxy formation history is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale, to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround
ERIC Educational Resources Information Center
Peurach, Donald J.; Neumerski, Christine M.
2015-01-01
The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
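The posterior-analysis idea behind CB-FS can be sketched as follows: given state labels for simulation frames (obtained in the paper from clustering and Markov state models, here simply synthesised), train a supervised model and rank the structural features that best separate the states. The data, sizes and classifier choice below are assumptions, not the authors' pipeline.

```python
# Hedged sketch of the posterior-analysis idea behind CB-FS: given state labels for
# simulation frames (here synthetic, in place of Markov-state-model states), rank the
# structural order parameters that best separate the states with a supervised model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 40            # frames x structural order parameters
X = rng.normal(size=(n_frames, n_features))
states = (X[:, 7] + 0.8 * X[:, 19] > 0).astype(int)   # states truly depend on features 7 and 19

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, states)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("top discriminating features:", ranking[:5])
```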
Self-Organized Evolution of Sandy Coastline Shapes: Connections with Shoreline Erosion Problems
NASA Astrophysics Data System (ADS)
Murray, A. B.; Ashton, A.
2002-12-01
Landward movement of the shoreline severely impacts property owners and communities where structures and infrastructure are built near the coast. While sea level rise will increase the average rate of coastal erosion, even a slight gradient in wave-driven alongshore sediment flux will locally overwhelm that effect, causing either shoreline accretion or enhanced erosion. Recent analysis shows that because of the nonlinear relationship between alongshore sediment flux and the angle between deep water wave crests and local shoreline orientation, in some wave climates a straight coastline is unstable (Ashton et al., Nature, 2001). When deep-water waves approach from angles greater than the one that maximizes alongshore flux, in concave-seaward shoreline segments sediment flux will diverge, causing erosion. Similarly, convex regions such as the crests of perturbations on an otherwise straight shoreline will experience accretion; perturbations will grow. When waves approach from smaller angles, the sign of the relationship between shoreline curvature and shoreline change is reversed, but any deviation from a perfectly straight coastline will still result in alongshore-inhomogeneous shoreline change. A numerical model designed to explore the long-term effects of this instability operating over a spatially extended alongshore domain has shown that as perturbations grow to finite amplitude and interact with each other, large-scale coastline structures can emerge. The character of the local and non-local interactions, and the resulting emergent structures, depends on the wave climate. The 100-km scale capes and cuspate forelands that form much of the coast of the Carolinas, USA, provide one possible natural example. Our modeling suggests that on such a shoreline, continued interactions between large-scale structures will cause continued large-scale change in coastline shape. Consequently, some coastline segments will tend to experience accentuated erosion. Communities established in these areas face discouraging future prospects. Attempts can be made to arrest the shoreline retreat on large scales, for example through large beach nourishment projects or policies that allow pervasive hard stabilization (e.g., seawalls, jetties) along a coastline segment. However, even if such attempts are successful for a significant period of time, the pinning in place of some parts of an otherwise dynamic system will change the large-scale evolution of the coastline, altering the future erosion/accretion experienced at other, perhaps distant, locations. Simple properties of alongshore sediment transport could also be relevant to alongshore-inhomogeneous shoreline change (including erosion 'hot spots') on shorter time scales and smaller spatial scales. We are comparing predictions arising from the modeling, and from analysis of alongshore transport as a function of shoreline orientation, to recent observations of shoreline change ranging across spatial scales from 100s of meters to 10s of kilometers, and time scales from days to decades (List and Farris, Coastal Sediments, 1999; Tebbens et al., PNAS, 2002). Considering that many other processes and factors can also influence shoreline change, initial results show a surprising degree of correlation between observations and predictions.
Costs and cost-effectiveness of vector control in Eritrea using insecticide-treated bed nets.
Yukich, Joshua O; Zerom, Mehari; Ghebremeskel, Tewolde; Tediosi, Fabrizio; Lengeler, Christian
2009-03-30
While insecticide-treated nets (ITNs) are a recognized effective method for preventing malaria, there has been an extensive debate in recent years about the best large-scale implementation strategy. Implementation costs and cost-effectiveness are important elements to consider when planning ITN programmes, but so far little information on these aspects is available from national programmes. This study uses a standardized methodology, as part of a larger comparative study, to collect cost data and cost-effectiveness estimates from a large programme providing ITNs at the community level and ante-natal care facilities in Eritrea. This is a unique model of ITN implementation fully integrated into the public health system. Base case analysis results indicated that the average annual cost of ITN delivery (2005 USD 3.98) was very attractive when compared with past ITN delivery studies at different scales. Financing was largely from donor sources though the Eritrean government and net users also contributed funding. The intervention's cost-effectiveness was in a highly attractive range for sub-Saharan Africa. The cost per DALY averted was USD 13 - 44. The cost per death averted was USD 438-1449. Distribution of nets coincided with significant increases in coverage and usage of nets nationwide, approaching or exceeding international targets in some areas. ITNs can be cost-effectively delivered at a large scale in sub-Saharan Africa through a distribution system that is highly integrated into the health system. Operating and sustaining such a system still requires strong donor funding and support as well as a functional and extensive system of health facilities and community health workers already in place.
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread; however, large-scale studies involving tens of thousands of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study, enrolling 11,002 community-dwelling adults in Japan and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites; however, its reliability for large-scale measurement is unclear. The aim of this study is to examine the reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples was completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients, were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, the coefficient of variation was less than 20% for 64 metabolites and less than 30% for 80 metabolites out of the 94 metabolites. The inter-batch coefficient of variation was less than 20% for 81 metabolites. The estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of the Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. Compared to published data from other large cohort measurement platforms, the reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
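A small sketch of the reproducibility summary described above: compute a per-metabolite coefficient of variation from repeated QC injections and count how many metabolites fall under the reporting thresholds. The QC matrix is synthetic and only stands in for the platform's data.

```python
# Hedged sketch: per-metabolite coefficient of variation (CV) from repeated QC
# injections, the kind of reproducibility summary reported for the CE-MS platform.
# The QC matrix below is synthetic (injections x metabolites).
import numpy as np

rng = np.random.default_rng(0)
qc = rng.normal(loc=100.0, scale=8.0, size=(120, 94))    # 120 QC injections, 94 metabolites

cv_percent = qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100
print("metabolites with CV < 20%:", int((cv_percent < 20).sum()), "of", qc.shape[1])
```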
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
Participation in International Large-Scale Assessments from a US Perspective
ERIC Educational Resources Information Center
Plisko, Valena White
2013-01-01
International large-scale assessments (ILSAs) play a distinct role in the United States' decentralized federal education system. Separate from national and state assessments, they offer an external, objective measure for the United States to assess student performance comparatively with other countries and over time. The US engagement in ILSAs…
NASA Astrophysics Data System (ADS)
Yang, Lei; Chen, Liding; Wei, Wei
2017-04-01
Soil water stored below the rainfall infiltration depth is a reliable water resource for plant growth in arid and semi-arid regions. To reduce serious soil erosion, large-scale human-introduced vegetation restoration was initiated in the Chinese Loess Plateau in the late 1990s. However, these activities may result in excessive water consumption and soil water deficit if no appropriate scientific guidance is offered. This in turn impacts regional ecological restoration and the sustainable management of water resources. In this study, soil water content data for the 0-5 m depth range were obtained by long-term field observation and geostatistical methods in six small watersheds covered with different land use patterns. Profile characteristics and spatial-temporal patterns of soil water were compared between different land use types, hillslopes, and watersheds. The results showed that: (1) Introduced vegetation consumed an excessive amount of water compared with native grassland and farmland, and induced temporally stable soil desiccation at depths of 0-5 m. The introduced vegetation decreased soil water content to levels lower than the reference value representing no human impact in all soil layers. (2) The analysis of differences in soil water at hillslope and watershed scales indicated that land use determined the spatial and temporal variability of soil water. Soil water at the watershed scale increased with the increasing area of farmland, and decreased with an increasing percentage of introduced vegetation. Land use structure determined the soil water condition, and land use pattern determined the spatial-temporal variability of soil water at the watershed scale. (3) Large-scale revegetation with introduced vegetation diminished the spatial heterogeneity of soil water at different scales. Land use pattern adjustment could be used to improve water resources management and maintain the sustainability of vegetation restoration.
Santos, Xavier; Felicísimo, Ángel M.
2016-01-01
Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID:27761304
Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.
Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K
2014-07-07
Recently CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of manufacturing factors, including variation in amplification, variation between readout components, wafer defects and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array is provided and proved capable of accounting for noise and gain variations across the pixel array. This analysis allows readout noise and conversion gain to be evaluated at pixel level, at stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance was further investigated in a typical x-ray application, i.e. mammography, showing CNR uniformity among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently in use (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency was performed, resulting in a higher DQE for this detector than for FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large area CMOS APS suitable for a range of bio-medical applications.
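Per-pixel conversion gain and read noise are commonly estimated from a mean-variance (photon-transfer) analysis of flat-field frames. The sketch below shows that standard generic method, assuming a stack of frames at several illumination levels; it is illustrative only and not the specific signal-generation model used in the paper.

    import numpy as np

    def per_pixel_gain_and_noise(frames_by_level):
        """frames_by_level: list of arrays of shape (n_frames, rows, cols), one per
        illumination level. Fits variance = mean/K + sigma^2 pixel by pixel and
        returns conversion gain K (e-/DN) and read noise (DN)."""
        means = np.stack([f.mean(axis=0) for f in frames_by_level])
        variances = np.stack([f.var(axis=0, ddof=1) for f in frames_by_level])
        x = means - means.mean(axis=0)
        y = variances - variances.mean(axis=0)
        slope = (x * y).sum(axis=0) / (x * x).sum(axis=0)          # 1/K per pixel
        intercept = variances.mean(axis=0) - slope * means.mean(axis=0)
        gain_e_per_dn = 1.0 / slope
        read_noise_dn = np.sqrt(np.clip(intercept, 0.0, None))
        return gain_e_per_dn, read_noise_dn

    # synthetic example: three illumination levels, 16 frames each (gain 2 e-/DN, noise 2 DN)
    rng = np.random.default_rng(0)
    levels = [rng.normal(m, np.sqrt(m / 2.0 + 4.0), size=(16, 64, 64)) for m in (10.0, 200.0, 800.0)]
    gain, noise = per_pixel_gain_and_noise(levels)
    print(gain.std() / gain.mean())      # coefficient of variation of the per-pixel gain

A per-pixel coefficient of variation computed this way is the kind of quantity the ⩽1.9% figure above refers to, though the paper's own estimator may differ.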
Investigation of aquifer-estuary interaction using wavelet analysis of fiber-optic temperature data
Henderson, R.D.; Day-Lewis, Frederick D.; Harvey, Charles F.
2009-01-01
Fiber-optic distributed temperature sensing (FODTS) provides sub-minute temporal and meter-scale spatial resolution over kilometer-long cables. Compared to conventional thermistor or thermocouple-based technologies, which measure temperature at discrete (and commonly sparse) locations, FODTS offers nearly continuous spatial coverage, thus providing hydrologic information at spatiotemporal scales previously impossible. Large and information-rich FODTS datasets, however, pose challenges for data exploration and analysis. To date, FODTS analyses have focused on time-series variance as the means to discriminate between hydrologic phenomena. Here, we demonstrate the continuous wavelet transform (CWT) and cross-wavelet transform (XWT) to analyze FODTS in the context of related hydrologic time series. We apply the CWT and XWT to data from Waquoit Bay, Massachusetts to identify the location and timing of tidal pumping of submarine groundwater.
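As a rough illustration of the wavelet machinery described here, the sketch below computes continuous wavelet transforms of two series and forms their cross-wavelet spectrum as the product of one transform with the complex conjugate of the other. It uses SciPy's generic cwt with a Morlet wavelet as a stand-in; the study's actual implementation, normalization, and significance testing may differ.

    import numpy as np
    from scipy import signal

    def cross_wavelet(x, y, widths, w=6.0):
        """Cross-wavelet spectrum of two equally sampled series (e.g., FODTS temperature
        at one cable location and a tidal stage record)."""
        Wx = signal.cwt(x, signal.morlet2, widths, w=w)   # shape (len(widths), len(x))
        Wy = signal.cwt(y, signal.morlet2, widths, w=w)
        return Wx * np.conj(Wy)                           # magnitude: common power; angle: phase lag

    # two synthetic series sharing a 12.42 h (semidiurnal) component, 1-minute sampling
    t = np.arange(0, 14 * 24 * 60)
    x = np.sin(2 * np.pi * t / (12.42 * 60)) + 0.3 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * t / (12.42 * 60) + 0.8) + 0.3 * np.random.randn(t.size)
    xwt = cross_wavelet(x, y, widths=np.geomspace(30, 3000, 60))

High common power at the semidiurnal scale, with a consistent phase angle, is the kind of signature used to identify tidal pumping in the FODTS record.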
NASA Astrophysics Data System (ADS)
Spero, Tanya L.; Otte, Martin J.; Bowden, Jared H.; Nolte, Christopher G.
2014-10-01
Spectral nudging—a scale-selective interior constraint technique—is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonstrated that spectral nudging improves the representation of regional climate in reanalysis-forced simulations compared with not using nudging in the interior of the domain. However, in the Weather Research and Forecasting (WRF) model, spectral nudging tends to produce degraded precipitation simulations when compared to analysis nudging—an interior constraint technique that is scale indiscriminate but also operates on moisture fields which until now could not be altered directly by spectral nudging. Since analysis nudging is less desirable for regional climate modeling because it dampens fine-scale variability, changes are proposed to the spectral nudging methodology to capitalize on differences between the nudging techniques and aim to improve the representation of clouds, radiation, and precipitation without compromising other fields. These changes include adding spectral nudging toward moisture, limiting nudging to below the tropopause, and increasing the nudging time scale for potential temperature, all of which collectively improve the representation of mean and extreme precipitation, 2 m temperature, clouds, and radiation, as demonstrated using a model-simulated 20 year historical period. Such improvements to WRF may increase the fidelity of regional climate data used to assess the potential impacts of climate change on human health and the environment and aid in climate change mitigation and adaptation studies.
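For readers unfamiliar with the technique, spectral nudging adds a relaxation term that acts only on the retained (large-scale) wavenumbers of a prognostic variable. A schematic form, not the exact WRF source expression, is

    \frac{\partial q}{\partial t} = F(q) - \frac{1}{\tau} \sum_{|k| \le k_{\max}} \left( \hat{q}_k - \hat{q}_k^{\mathrm{LS}} \right) e^{i k x},

where q is a nudged variable (here extended to include moisture), \hat{q}_k its spectral coefficient, \hat{q}_k^{\mathrm{LS}} the corresponding coefficient of the large-scale driving fields, \tau the nudging time scale (lengthened here for potential temperature), and the relaxation is applied only at model levels below the tropopause.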
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
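A minimal Python sketch of the Floor coarse-graining step and a basic DFA estimate is given below to make the procedure concrete; the Symmetry and Centro-Symmetry variants and the exact DFA settings used in the study are not reproduced here.

    import numpy as np

    def floor_coarse_grain(x, delta):
        """Floor coarse-graining: quantize signal magnitudes onto a grid of width delta."""
        return np.floor(x / delta) * delta

    def dfa(x, scales, order=1):
        """Detrended fluctuation analysis; returns F(n) for each window size n."""
        y = np.cumsum(x - np.mean(x))                     # integrated profile
        F = []
        for n in scales:
            n_win = y.size // n
            segs = y[:n_win * n].reshape(n_win, n)
            t = np.arange(n)
            rms = []
            for seg in segs:
                coeffs = np.polyfit(t, seg, order)        # local polynomial trend
                rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.asarray(F)

    # the scaling exponent is the log-log slope of F(n) versus n; a crossover shows up
    # as a change of slope after coarse-graining
    x = np.random.randn(2 ** 14)                          # stand-in for a correlated signal
    scales = np.unique(np.logspace(1, 3, 20).astype(int))
    F_raw = dfa(x, scales)
    F_cg = dfa(floor_coarse_grain(x, delta=2.0), scales)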
Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.
2014-01-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences. PMID:25392599
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
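The rescoring idea itself, independent of the specific l2-SVM-MFN or TRON solvers discussed above, can be sketched with any linear SVM: train on target versus decoy PSM features, then re-rank PSMs by the learned decision value. The toy example below uses scikit-learn's liblinear-backed LinearSVC purely for illustration, with synthetic features; it is not Percolator's implementation.

    import numpy as np
    from sklearn.svm import LinearSVC

    # one row per peptide-spectrum match (features such as XCorr, deltaCn, charge, ...)
    rng = np.random.default_rng(0)
    X_targets = rng.normal(1.0, 1.0, size=(5000, 8))
    X_decoys = rng.normal(0.0, 1.0, size=(5000, 8))
    X = np.vstack([X_targets, X_decoys])
    y = np.concatenate([np.ones(5000), -np.ones(5000)])   # +1 target, -1 decoy

    clf = LinearSVC(C=1.0).fit(X, y)                       # L2-regularized linear SVM
    scores = clf.decision_function(X)                      # recalibrated PSM scores
    order = np.argsort(scores)[::-1]                       # rank PSMs by the new score

The speedups reported in the abstract concern how quickly such a linear SVM is trained at scale, not what it computes once trained.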
Principal Component Analysis Based Measure of Structural Holes
NASA Astrophysics Data System (ADS)
Deng, Shiguo; Zhang, Wenqing; Yang, Huijie
2013-02-01
Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly larger than that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.
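The abstract does not give the formula, but one plausible reading of a PCA-based "compressibility" measure is the fraction of variance in node neighborhood patterns (rows of the adjacency matrix) captured by a few leading principal components: many identical patterns compress well. The hypothetical sketch below computes that quantity and should be read as an illustration of the general idea, not as the authors' definition.

    import numpy as np

    def compressibility(adjacency, k=3):
        """Fraction of variance of node neighborhood vectors captured by k principal
        components (a hypothetical PCA-based structural-hole measure, illustration only)."""
        A = adjacency - adjacency.mean(axis=0, keepdims=True)   # center neighborhood vectors
        s = np.linalg.svd(A, compute_uv=False)
        var = s ** 2
        return var[:k].sum() / var.sum()

    # a ring lattice has many identical neighborhood patterns and compresses well
    ring = np.roll(np.eye(50), 1, axis=1) + np.roll(np.eye(50), -1, axis=1)
    print(compressibility(ring, k=3))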
NASA Astrophysics Data System (ADS)
Huang, Yan; Liu, Hongxing; Hinkel, Kenneth; Yu, Bailang; Beck, Richard; Wu, Jianping
2017-11-01
The Arctic coastal plain is covered with numerous thermokarst lakes. These lakes are closely linked to climate and environmental change through their heat and water budgets. We examined the intralake thermal structure at the local scale and investigated the water temperature pattern of lakes at the regional scale by utilizing extensive in situ measurements and multidate Landsat-8 remote sensing data. Our analysis indicates that the lake skin temperatures derived from satellite thermal sensors during most of the ice-free summer period effectively represent the lake bulk temperature because the lakes are typically well mixed and without significant vertical stratification. With the relatively high-resolution Landsat-8 thermal data, we were able to quantitatively examine intralake lateral temperature differences and gradients in relation to geographical location, topography, meteorological factors, and lake morphometry for the first time. Our results suggest that wind speed and direction not only control the vertical stratification but also influence lateral differences and gradients of lake surface temperature. Wind can considerably reduce the intralake temperature gradient. Interestingly, we found that geographical location (latitude, longitude, distance to the ocean) and lake morphometry (surface size, depth, volume) not only control lake temperature regionally but also affect the lateral temperature gradient and homogeneity level within each individual lake. For the Arctic coastal plain, at regional scales, inland and southern lakes tend to have larger horizontal temperature differences and gradients compared to coastal and northern lakes. At local scales, large and shallow lakes tend to have large lateral temperature differences relative to small and deep lakes.
1983-07-01
Aquatic Plant Control Research Program, Technical Report A-78-2: Large-Scale Operations Management Test of Use of the... U.S. Army Engineer Waterways Experiment Station, P.O. Box 631, Vicksburg, Miss. 39180.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
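A typical "simple algebraic criterion" in this literature checks an aggregate comparison matrix built from subsystem decay rates and interconnection bounds; the sketch below tests whether such a matrix is Hurwitz, which is a common sufficient condition for asymptotic stability of the composite system. The matrix entries here are placeholders, not the paper's construction.

    import numpy as np

    def composite_system_stable(W):
        """Sufficient test: the aggregate comparison matrix W (negative diagonal from
        subsystem decay rates, non-negative off-diagonal interconnection bounds)
        must be Hurwitz, i.e. all eigenvalues have negative real part."""
        return bool(np.all(np.linalg.eigvals(W).real < 0))

    # three asymptotically stable subsystems with weak coupling (placeholder numbers)
    W = np.array([[-2.0, 0.3, 0.1],
                  [0.2, -1.5, 0.4],
                  [0.1, 0.2, -1.0]])
    print(composite_system_stable(W))   # True: weak coupling preserves stability

Connective stability under structural perturbations corresponds to this test holding for every admissible interconnection matrix, not just one nominal W.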
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele
2017-01-01
SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts both in open ocean and coastal waters of southeastern Italy using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified both in forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparing with a large-scale survey CTD (conductivity-temperature-depth) in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolution (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS). Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing a good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.
NASA Astrophysics Data System (ADS)
Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.
2016-04-01
We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.
Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.
Singh, G D; McNamara, J A; Lozanoff, S
1997-08-01
The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.
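Partial warps are derived from the eigenvectors of the thin-plate spline bending-energy matrix of the reference landmark configuration. The sketch below computes that matrix for a 2-D landmark set in the standard way (kernel U(r) = r^2 log r^2); it is a generic illustration, not the software or scaling conventions used in the study.

    import numpy as np

    def tps_bending_energy_matrix(ref):
        """Bending-energy matrix of the thin-plate spline defined by 2-D reference landmarks."""
        n = ref.shape[0]
        d = np.linalg.norm(ref[:, None, :] - ref[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            K = np.where(d > 0, d ** 2 * np.log(d ** 2), 0.0)   # U(r) = r^2 log r^2
        P = np.hstack([np.ones((n, 1)), ref])                    # affine part
        L = np.zeros((n + 3, n + 3))
        L[:n, :n] = K
        L[:n, n:] = P
        L[n:, :n] = P.T
        return np.linalg.inv(L)[:n, :n]                           # upper-left block of L^-1

    # partial (principal) warps: eigenvectors of the bending-energy matrix
    ref = np.random.rand(13, 2)           # e.g. 13 digitized cranial-base landmarks
    Be = tps_bending_energy_matrix(ref)
    eigvals, eigvecs = np.linalg.eigh(Be)

Small eigenvalues correspond to large-scale, low-bending-energy deformations (the high-numbered partial warps discussed above), while large eigenvalues correspond to localized shape changes.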
Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets
NASA Astrophysics Data System (ADS)
Juric, Mario
2011-01-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 billion rows of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbits/sec (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
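The horizontal partitioning scheme described above, cells keyed on (lon, lat, t), can be illustrated with a simple cell-key function like the one below. LSD's real partitioner, including its handling of overlap margins and its actual sky indexing, is more involved, so treat this purely as a sketch of the idea.

    import numpy as np

    def cell_key(lon, lat, t, dlon=1.0, dlat=1.0, dt=30.0):
        """Map a (lon, lat, epoch) triple to a coarse spatio-temporal cell id.
        dlon/dlat in degrees, dt in days; all grid sizes are illustrative."""
        i = int(np.floor(lon / dlon))
        j = int(np.floor((lat + 90.0) / dlat))
        k = int(np.floor(t / dt))
        return (i, j, k)

    # rows landing in the same cell are stored and processed together, so positional
    # cross-matches and time-series sweeps only need to touch nearby cells
    print(cell_key(210.3, -12.7, t=55197.4))

Map-reduce kernels then run once per cell, which is what makes the parallel sweeps embarrassingly distributable.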
Measured acoustic characteristics of ducted supersonic jets at different model scales
NASA Technical Reports Server (NTRS)
Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.
1993-01-01
A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range which may be used to demonstrate the validity of scaling methods employed in the investigation of this phenomenon.
NASA Astrophysics Data System (ADS)
Massei, N.; Fournier, M.
2010-12-01
Daily Seine river flow from 1950 to 2008 was analyzed using the Hilbert-Huang Transform (HHT). Over the last ten years, this method, which combines the so-called Empirical Mode Decomposition (EMD) multiresolution analysis and the Hilbert transform, has proven its efficiency for the analysis of transient oscillatory signals, although the mathematical definition of the EMD is not totally established yet. HHT also provides an interesting alternative to other time-frequency or time-scale analyses of non-stationary signals, the most famous of which are wavelet-based approaches. In this application of HHT to the analysis of the hydrological variability of the Seine river, we seek to characterize the interannual patterns of daily flow, differentiate them from the short-term dynamics and eventually interpret them in the context of regional climate regime fluctuations. To this end, HHT is also applied to the North-Atlantic Oscillation (NAO) through the annual winter-months NAO index time series. For both hydrological and climatic signals, dominant variability scales are extracted and their temporal variations analyzed by determining the instantaneous frequency of each component. When compared to previous results obtained from the continuous wavelet transform (CWT) on the same data, HHT highlighted the same scales and roughly the same internal components for each signal. However, HHT allowed the identification and extraction of many more similar features between NAO and Seine flow during the 1950-2008 period (e.g., around 7 yr) than was obtained from CWT, which is to say that variability scales in flow likely to originate from climatic regime fluctuations were more properly identified in river flow. In addition, HHT allowed a more accurate determination of singularities in the analyzed natural processes than CWT, for which the time-frequency resolution partly depends on the basic properties of the filter (i.e., the reference wavelet chosen initially). Compared to CWT or even to discrete wavelet multiresolution analysis, HHT is auto-adaptive, non-parametric, allows an orthogonal decomposition of the analyzed signal and provides a more accurate estimation of changing variability scales across time for highly transient signals.
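For readers who want to experiment with the approach, the PyEMD package (an assumed dependency; the authors may have used different software) exposes the Empirical Mode Decomposition directly, and instantaneous frequencies of each intrinsic mode function can then be obtained from the analytic signal. The discharge series below is synthetic, built only to stand in for a daily flow record.

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD                 # pip install EMD-signal (assumed dependency)

    # synthetic stand-in for a daily discharge series: annual cycle, slow ~7-yr mode, noise
    t = np.arange(59 * 365)
    flow = (300 + 150 * np.sin(2 * np.pi * t / 365.25)
            + 80 * np.sin(2 * np.pi * t / (7 * 365.25))
            + 30 * np.random.randn(t.size))

    imfs = EMD().emd(flow)                             # intrinsic mode functions, fast to slow
    analytic = hilbert(imfs, axis=1)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_freq = np.diff(phase, axis=1) / (2 * np.pi)   # cycles per day, one row per IMF

Tracking inst_freq over time for the slow IMFs is the HHT analogue of reading scale changes off a wavelet scalogram.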
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
NASA Astrophysics Data System (ADS)
Lange, B. A.; Haas, C.; Beckers, J.; Hendricks, S.
2011-12-01
Satellite observations demonstrate a decreasing summer Arctic sea ice extent over the past ~40 years, as well as a smaller perennial sea ice zone, with a significantly accelerated decline in the last decade. Recent ice extent observations are significantly lower than predicted by any model employed by the Intergovernmental Panel on Climate Change. The disagreement between modeled and observed results, along with the large variability of model results, can be attributed in part to a lack of consistent and long-term sea ice mass balance observations for the High Arctic. This study presents the derivation of large-scale (individual floe) seasonal sea ice mass balance in the Lincoln Sea and Nares Strait. Large-scale melt estimates are derived by comparing airborne electromagnetic induction thickness surveys conducted in spring with surveys conducted in summer 2009. The comparison of coincident floes is ensured by tracking sea ice using ENVISAT ASAR and MODIS satellite imagery. Only EM thickness survey sections of floes that were surveyed in both spring and summer are analyzed, and the resulting modal thicknesses of the distributions, which represent the most abundant ice type, are compared to determine the difference in thickness and therefore total melt (snow + basal ice + surface ice melt). Preliminary analyses demonstrate a bulk (regional ice tracking) seasonal total thickness variability of 1.1 m, a Lincoln Sea modal thickness of 3.7 m (April 2009) and a Nares Strait modal thickness of 2.6 m (August 2009) (Fig. 1). More detailed floe tracking, in-depth analysis of EM surveys and removal of deformed ridged/rafted sea ice (due to inaccuracies over deformed ice) will result in more accurate melt estimates for this region and will be presented. The physical structure of deformed sea ice and the footprint of the EM instrument typically lead to underestimates of the total thicknesses observed. Seasonal variations of sea ice properties can add additional uncertainty to the response of the EM instrument over deformed ridged/rafted sea ice. Here we will present additional analysis of the data comparing total thickness to ridge height that will provide some insight into the magnitude of seasonal discrepancies experienced by the EM instrument over deformed ice.
2007-12-21
2.4 Implementation of non-uniform grid size. The numerical method has been extended to allow non-uniform grid sizes in the x and y directions... the vertical excursion of the swash motion A is expressed as... Figures 3 and 4 compare the XBeach results with... M. A. Van Gent, A. J. H. M. Reniers, and D. J. R. Walstra (2008), Analysis of dune erosion processes in large scale flume experiments, submitted to...
Guidelines for CubeSat's Thermal Design
NASA Technical Reports Server (NTRS)
Rodriguez-Ruiz, Juan; Patel, Deepak
2015-01-01
Thermal and Fluids Analysis Workshop 2015, Silver Spring, MD. NCTS 19104-15. What does it take to thermally design low-cost, low-mass CubeSats? What are the differences in the approach when you compare with large-scale missions? What additional risk is acceptable? What is the approach to hardware? How is the testing campaign run? These are some of the questions that will be addressed in this course, which is designed to equip the attendees to support the development of CubeSats at their organization.
Transient and diffusion analysis of HgCdTe
NASA Technical Reports Server (NTRS)
Clayton, J. C.
1982-01-01
Solute redistribution during directional solidification of HgCdTe is addressed. Both one-dimensional and two-dimensional models for solute redistribution are treated and model results compared to experiment. The central problem studied is the cause of radial inhomogeneities found in directionally solidified HgCdTe. A large scale gravity-driven interface instability, termed shape instability, is postulated to be the cause of radial inhomogeneities. Recommendations for future work, along with appropriate computer programs, are included.
Large scale shell model study of nuclear spectroscopy in nuclei around 132Sn
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
The properties of low-lying 2+ states in chains of nuclei in the proximity of the magic number N=82 are investigated within a new shell model approach exploiting an iterative algorithm alternative to Lanczos. The calculation yields levels and transition strengths in overall good agreement with experiments. The comparative analysis of the E2 and M1 transitions supports, in many cases, the scheme provided by the interacting boson model.
Numerical study of fire whirlwind taking into account radiative heat transfer
NASA Astrophysics Data System (ADS)
Sakai, S.; Miyagi, N.
2010-06-01
The fire whirlwind is a strong swirling flow with flame and sparks, which may occur in a widespread fire in an urban region following an earthquake disaster or an air raid, or in a large-scale fire such as a forest fire. A fire whirlwind moves and promotes the spread of fire, and may rapidly extend serious damage. In this study, by performing numerical analysis of the fire whirlwind with respect to scale effects, it is examined whether a relationship exists between the real phenomenon and the phenomenon in a reduction model, taking radiative heat transfer into account. Three-dimensional analyses are performed to investigate the thermal and flow fields using the analysis software FLUENT 6.3. Swirling flows at the original scale and at 1/10, 1/50, and 1/100 reductions of the original are analyzed from onset to dissipation. As an analytical condition, parameter calculations are repeated to find the velocity of the parallel flow at which the swirling flow occurs most easily for each reduction model, and the scale effect is then discussed by comparing the velocity of the natural convection, the velocity of the parallel flow, the center pressure of the whirlwind and the duration of the swirling flow. A C-shaped heat source model is analyzed as well as an L-shaped model, which is one of the representative examples of the fire whirlwind that occurred in Tokyo during the Great Kanto Earthquake (1923). The results of the numerical analysis show that there is a scale effect on the speed of the parallel flow needed to generate the swirling flow.
A Combined Ethical and Scientific Analysis of Large-scale Tests of Solar Climate Engineering
NASA Astrophysics Data System (ADS)
Ackerman, T. P.
2017-12-01
Our research group recently published an analysis of the combined ethical and scientific issues surrounding large-scale testing of stratospheric aerosol injection (SAI; Lenferna et al., 2017, Earth's Future). We are expanding this study in two directions. The first is extending this same analysis to other geoengineering techniques, particularly marine cloud brightening (MCB). MCB has substantial differences to SAI in this context because MCB can be tested over significantly smaller areas of the planet and, following injection, has a much shorter lifetime of weeks as opposed to years for SAI. We examine issues such as the role of intent, the lesser of two evils, and the nature of consent. In addition, several groups are currently considering climate engineering governance tools such as a code of ethics and a registry. We examine how these tools might influence climate engineering research programs and, specifically, large-scale testing. The second direction of expansion is asking whether ethical and scientific issues associated with large-scale testing are so significant that they effectively preclude moving ahead with climate engineering research and testing. Some previous authors have suggested that no research should take place until these issues are resolved. We think this position is too draconian and consider a more nuanced version of this argument. We note, however, that there are serious questions regarding the ability of the scientific research community to move to the point of carrying out large-scale tests.
XLinkDB 2.0: integrated, large-scale structural analysis of protein crosslinking data
Schweppe, Devin K.; Zheng, Chunxiang; Chavez, Juan D.; Navare, Arti T.; Wu, Xia; Eng, Jimmy K.; Bruce, James E.
2016-01-01
Motivation: Large-scale chemical cross-linking with mass spectrometry (XL-MS) analyses are quickly becoming a powerful means for high-throughput determination of protein structural information and protein–protein interactions. Recent studies have garnered thousands of cross-linked interactions, yet the field lacks an effective tool to compile experimental data or access the network and structural knowledge for these large scale analyses. We present XLinkDB 2.0 which integrates tools for network analysis, Protein Databank queries, modeling of predicted protein structures and modeling of docked protein structures. The novel, integrated approach of XLinkDB 2.0 enables the holistic analysis of XL-MS protein interaction data without limitation to the cross-linker or analytical system used for the analysis. Availability and Implementation: XLinkDB 2.0 can be found here, including documentation and help: http://xlinkdb.gs.washington.edu/. Contact: jimbruce@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153666
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate basic analysis of scaling issues, specifically the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- or large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
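Protein-level FDR control in this setting typically relies on target-decoy counting: sort protein groups by score and report, at each threshold, the proportion of decoys above it. The snippet below is a minimal illustration of that generic estimator, not ProteinInferencer's scoring model.

    def protein_fdr(proteins):
        """proteins: list of (score, is_decoy) tuples, one per inferred protein group.
        Returns (score, q_value) pairs using the standard decoy-counting estimate."""
        ranked = sorted(proteins, key=lambda p: p[0], reverse=True)
        fdr_at_rank, decoys = [], 0
        for i, (score, is_decoy) in enumerate(ranked, start=1):
            decoys += is_decoy
            fdr_at_rank.append((score, decoys / i))        # FDR estimate at this threshold
        # q-value: the minimum FDR at which a given protein would still be accepted
        q, running_min = [], 1.0
        for score, fdr in reversed(fdr_at_rank):
            running_min = min(running_min, fdr)
            q.append((score, running_min))
        return list(reversed(q))

    hits = [(12.3, 0), (11.8, 0), (10.1, 1), (9.7, 0), (8.2, 1)]   # toy scored protein groups
    print(protein_fdr(hits))

The hard parts the paper addresses, combining multiple large datasets at a controlled FDR and resolving shared peptides among homologous proteins, sit upstream of this simple counting step.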
NASA Astrophysics Data System (ADS)
Bassam, S.; Ren, J.
2015-12-01
Runoff generated during heavy rainfall imposes quick, but often intense, changes in the flow of streams, which increase the chance of flash floods in the vicinity of the streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers the meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically based, semi-distributed model that is capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT in sub-daily water flow modeling within large watersheds has not been studied as much as the long-term and daily time scales. In this study we investigate the water flow in a large, semi-arid watershed, the Nueces River Basin (NRB), with a drainage area of 16,950 mi2 located in South Texas, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall; (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed; and (3) examining means for model performance improvement during model calibration and verification based on the results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: an hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
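The clustering stage of such an approach can be sketched very simply: average the NDVI time series over each object (segment), then run an unsupervised clusterer such as k-means on the resulting temporal profiles. The code below uses hypothetical synthetic arrays ndvi (time x rows x cols) and segments (object labels per pixel); the study's actual hyperclustering and landscape stratification are not reproduced.

    import numpy as np
    from sklearn.cluster import KMeans

    def object_profiles(ndvi, segments):
        """Mean NDVI time series per object; ndvi has shape (t, rows, cols)."""
        labels = np.unique(segments)
        profiles = np.stack([ndvi[:, segments == lab].mean(axis=1) for lab in labels])
        return labels, profiles                       # (n_objects,), (n_objects, t)

    # unsupervised grouping of objects into candidate cropping systems
    rng = np.random.default_rng(1)
    ndvi = rng.random((23, 100, 100))                 # 23 16-day composites (hypothetical)
    segments = rng.integers(0, 50, size=(100, 100))   # 50 objects from a prior segmentation
    labels, profiles = object_profiles(ndvi, segments)
    clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(profiles)

In the landscape-clustering variant described above, this step would be run separately within each landscape unit rather than once over the whole scene.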
NASA Astrophysics Data System (ADS)
He, Xiangming; Li, Jianjun; Cheng, Hongwei; Jiang, Changyin; Wan, Chunrong
A novel synthesis of controlled crystallization and granulation was attempted to prepare nano-scale β-Ni(OH)2 cathode materials for high power Ni-MH batteries. Nano-scale β-Ni(OH)2 and Co(OH)2 with a diameter of 20 nm were prepared by controlled crystallization, mixed by ball milling, and granulated into spherical grains of about 5 μm by spray drying granulation. Both the addition of nano-scale Co(OH)2 and granulation significantly enhanced the electrochemical performance of nano-scale Ni(OH)2. XRD and TEM analysis showed that there was a large number of defects in the crystal lattice of the as-prepared nano-scale Ni(OH)2, and DTA-TG analysis showed that it had both a lower decomposition temperature and a higher decomposition reaction rate, indicating less thermal stability compared with conventional micro-scale Ni(OH)2, and indicating that it had higher electrochemical performance. The granulated grains of nano-scale Ni(OH)2 mixed with nano-scale Co(OH)2 at Co/Ni = 1/20 presented the highest specific capacity, reaching its theoretical value of 289 mAh g-1 at 1 C, and also exhibited much improved electrochemical performance at high discharge rates up to 10 C. The granulated grains of nano-scale β-Ni(OH)2 mixed with nano-scale Co(OH)2 are a promising cathode active material for high power Ni-MH batteries.
Combined process automation for large-scale EEG analysis.
Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E
2012-01-01
Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
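The value of linking the five steps is that each stage's output feeds the next without manual intervention. The skeleton below chains simplified stand-ins for the stages (window extraction, band filtering, threshold spike detection, rate quantification, PSD); the study's concrete implementations are not described in the abstract, and the sampling rate and thresholds here are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    FS = 1000.0  # sampling rate in Hz (assumed)

    def align_and_isolate(rec, stim_times, pre=1.0, post=2.0):
        """Step 1: cut pre-/post-stimulation windows around each stimulation time (s)."""
        return [(rec[int((t - pre) * FS):int(t * FS)],
                 rec[int(t * FS):int((t + post) * FS)]) for t in stim_times]

    def bandpass(sig, lo, hi, order=4):
        """Step 2: band-limited waveform for one user-defined frequency band."""
        b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        return filtfilt(b, a, sig)

    def detect_spikes(sig, k=4.0):
        """Step 3 (simplified): threshold crossings at k times a robust noise estimate."""
        thr = k * np.median(np.abs(sig)) / 0.6745
        return np.where(np.abs(sig) > thr)[0]

    def run_pipeline(rec, stim_times, bands=((4, 12), (30, 80))):
        out = []
        for pre_win, post_win in align_and_isolate(rec, stim_times):    # (1)
            for lo, hi in bands:
                x = bandpass(post_win, lo, hi)                          # (2)
                spikes = detect_spikes(x)                               # (3)
                rate = spikes.size / (x.size / FS)                      # (4) spike-rate quantification
                f, pxx = welch(x, fs=FS)                                # (5) power spectral density
                out.append({"band": (lo, hi), "rate": rate, "psd": (f, pxx)})
        return out

    rec = np.random.randn(60_000)            # one minute of synthetic EEG at 1 kHz
    print(len(run_pipeline(rec, stim_times=[10.0, 30.0])))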
Cai, Long-Fei; Zhu, Ying; Du, Guan-Sheng; Fang, Qun
2012-01-03
We described a microfluidic chip-based system capable of generating droplet array with a large scale concentration gradient by coupling flow injection gradient technique with droplet-based microfluidics. Multiple modules including sample injection, sample dispersion, gradient generation, droplet formation, mixing of sample and reagents, and online reaction within the droplets were integrated into the microchip. In the system, nanoliter-scale sample solution was automatically injected into the chip under valveless flow injection analysis mode. The sample zone was first dispersed in the microchannel to form a concentration gradient along the axial direction of the microchannel and then segmented into a linear array of droplets by immiscible oil phase. With the segmentation and protection of the oil phase, the concentration gradient profile of the sample was preserved in the droplet array with high fidelity. With a single injection of 16 nL of sample solution, an array of droplets with concentration gradient spanning 3-4 orders of magnitude could be generated. The present system was applied in the enzyme inhibition assay of β-galactosidase to preliminarily demonstrate its potential in high throughput drug screening. With a single injection of 16 nL of inhibitor solution, more than 240 in-droplet enzyme inhibition reactions with different inhibitor concentrations could be performed with an analysis time of 2.5 min. Compared with multiwell plate-based screening systems, the inhibitor consumption was reduced 1000-fold. © 2011 American Chemical Society
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
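The core trick, replacing a large observation vector with a much smaller random projection of it before the geostatistical inversion, can be illustrated in a few lines. The Python example below applies a Gaussian sketching matrix to a linearized observation operator and data vector on a toy linear problem; RGA's actual sketching construction and its coupling to the PCGA machinery are more elaborate.

    import numpy as np

    def sketch_observations(J, d, m):
        """Reduce n observations to m << n sketched observations.
        J: (n, p) linearized forward operator, d: (n,) data vector."""
        n = d.size
        S = np.random.randn(m, n) / np.sqrt(m)       # Gaussian sketching matrix
        return S @ J, S @ d                          # solve the m-dimensional problem instead

    # toy linear inverse problem: recover x from many noisy observations
    rng = np.random.default_rng(0)
    x_true = rng.standard_normal(50)
    J = rng.standard_normal((100_000, 50))
    d = J @ x_true + 0.01 * rng.standard_normal(100_000)
    Js, ds = sketch_observations(J, d, m=500)
    x_hat = np.linalg.lstsq(Js, ds, rcond=None)[0]   # close to x_true despite the 200x reduction

The sketch size m is chosen to match the information content of the data rather than its raw size, which is why the method's cost stays nearly flat as the number of observations grows.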
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method
2010-01-25
Spatial and temporal soil moisture dynamics are critically needed to... scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large scale variability is a crucial... dynamics is essential in hydrological and meteorological modeling and improves our understanding of land surface-atmosphere interactions.
Tropical Cyclone Information System
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Knosp, Brian W.; Vu, Quoc A.; Yi, Chao; Hristova-Veleva, Svetla M.
2009-01-01
The JPL Tropical Cyclone Information System (TCIS) is a Web portal (http://tropicalcyclone.jpl.nasa.gov) that provides researchers with an extensive set of observed hurricane parameters together with large-scale and convection resolving model outputs. It provides a comprehensive set of high-resolution satellite, airborne, and in-situ observations in both image and data formats. Large-scale datasets depict the surrounding environmental parameters such as SST (Sea Surface Temperature) and aerosol loading. Model outputs and analysis tools are provided to evaluate model performance and compare observations from different platforms. The system pertains to the thermodynamic and microphysical structure of the storm, the air-sea interaction processes, and the larger-scale environment as depicted by ocean heat content and the aerosol loading of the environment. Currently, the TCIS is populated with satellite observations of all tropical cyclones observed globally during 2005. There is a plan to extend the database both forward in time till present as well as backward to 1998. The portal is powered by a MySQL database and an Apache/Tomcat Web server on a Linux system. The interactive graphic user interface is provided by Google Map.
Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Wang, John T.; Lotts, Christine G.; Sleight, David W.
1999-01-01
This paper demonstrates the progressive failure analysis capability in NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions have reasonably good correlation with the test result.
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Furthermore, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host-range-based clustering.
Large-scale brain networks are distinctly affected in right and left mesial temporal lobe epilepsy.
de Campos, Brunno Machado; Coan, Ana Carolina; Lin Yasuda, Clarissa; Casseb, Raphael Fernandes; Cendes, Fernando
2016-09-01
Mesial temporal lobe epilepsy (MTLE) with hippocampal sclerosis (HS) is associated with functional and structural alterations extending beyond the temporal regions and an abnormal pattern of brain resting state network (RSN) connectivity. We hypothesized that the interaction of large-scale RSNs is differently affected in patients with right- and left-MTLE with HS compared to controls. We aimed to determine and characterize these alterations through the analysis of 12 RSNs, functionally parceled into 70 regions of interest (ROIs), from resting-state functional MRIs of 99 subjects (52 controls, 26 right- and 21 left-MTLE patients with HS). Image preprocessing and statistical analysis were performed using the UF²C toolbox, which provided ROI-wise results for intranetwork and internetwork connectivity. Intranetwork abnormalities were observed in the dorsal default mode network (DMN) in both groups of patients and in the posterior salience network in right-MTLE. Both groups showed abnormal correlation between the dorsal-DMN and the posterior salience network, as well as between the dorsal-DMN and the executive-control network. Patients with left-MTLE also showed reduced correlation between the dorsal-DMN and the visuospatial network and increased correlation between the bilateral thalamus and the posterior salience network. The ipsilateral hippocampus stood out as a central area of abnormalities. Alterations in left-MTLE showed a low clustering coefficient, whereas the altered connections in right-MTLE showed a low clustering coefficient in the DMN but a high one in the posterior salience regions. Both right- and left-MTLE patients with HS have widespread abnormal interactions of large-scale brain networks; however, all parameters evaluated indicate that left-MTLE has a more intricate bihemispheric dysfunction compared to right-MTLE. Hum Brain Mapp 37:3137-3152, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Haugum, Mona; Danielsen, Kirsten; Iversen, Hilde Hestad; Bjertnaes, Oyvind
2014-12-01
An important goal for national and large-scale surveys of user experiences is quality improvement. However, large-scale surveys are normally conducted by a professional external surveyor, creating an institutionalized division between the measurement of user experiences and the quality work that is performed locally. The aim of this study was to identify and describe scientific studies related to the use of national and large-scale surveys of user experiences in local quality work. The databases searched were Ovid EMBASE, Ovid MEDLINE, Ovid PsycINFO and the Cochrane Database of Systematic Reviews. Scientific publications about user experiences and satisfaction were selected that addressed the extent to which data from national and other large-scale user experience surveys are used for local quality work in the health services. Themes of interest were identified and a narrative analysis was undertaken. Thirteen publications were included, all of which differed substantially in several characteristics. The results show that large-scale surveys of user experiences are used in local quality work. The types of follow-up activity varied considerably, from conducting a follow-up analysis of user experience survey data to information sharing and more systematic efforts to use the data as a basis for improving the quality of care. This review shows that large-scale surveys of user experiences are used in local quality work. However, there is a need for more, better and standardized research in this field. The considerable variation in follow-up activities points to the need for systematic guidance on how to use data in local quality work. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Manifestations of dynamo driven large-scale magnetic field in accretion disks of compact objects
NASA Technical Reports Server (NTRS)
Chagelishvili, G. D.; Chanishvili, R. G.; Lominadze, J. G.; Sokhadze, Z. A.
1991-01-01
A nonlinear theory of the turbulent dynamo was developed, showing that in accretion disks of compact objects the generated large-scale magnetic field (when generation takes place) has a practically toroidal configuration. Its energy density can be much higher than the energy density of turbulent pulsations, and it becomes comparable with the thermal energy density of the medium. On this basis, the manifestations that the large-scale magnetic field can produce during accretion onto black holes and gravimagnetic rotators, respectively, are presented.
Human, vector and parasite Hsp90 proteins: A comparative bioinformatics analysis.
Faya, Ngonidzashe; Penkler, David L; Tastan Bishop, Özlem
2015-01-01
The treatment of protozoan parasitic diseases is challenging, and thus identification and analysis of new drug targets is important. Parasites survive within host organisms, and some need intermediate hosts to complete their life cycle. Changing host environment puts stress on parasites, and often adaptation is accompanied by the expression of large amounts of heat shock proteins (Hsps). Among Hsps, Hsp90 proteins play an important role in stress environments. Yet, there has been little computational research on Hsp90 proteins to analyze them comparatively as potential parasitic drug targets. Here, an attempt was made to gain detailed insights into the differences between host, vector and parasitic Hsp90 proteins by large-scale bioinformatics analysis. A total of 104 Hsp90 sequences were divided into three groups based on their cellular localizations; namely cytosolic, mitochondrial and endoplasmic reticulum (ER). Further, the parasitic proteins were divided according to the type of parasite (protozoa, helminth and ectoparasite). Primary sequence analysis, phylogenetic tree calculations, motif analysis and physicochemical properties of Hsp90 proteins suggested that despite the overall structural conservation of these proteins, parasitic Hsp90 proteins have unique features which differentiate them from human ones, thus encouraging the idea that protozoan Hsp90 proteins should be further analyzed as potential drug targets.
Parallel computing for probabilistic fatigue analysis
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.
1993-01-01
This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that a distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
Global-scale patterns of forest fragmentation
Kurt H. Riitters; James D. Wickham; R. O'Neill; B. Jones; E. Smith
2000-01-01
We report an analysis of forest fragmentation based on 1-km resolution land-cover maps for the globe. Measurements in analysis windows from 81 km^2 (9 x 9 pixels, "small" scale) to 59,049 km^2 (243 x 243 pixels, "large" scale) were used to characterize the fragmentation around each forested pixel. We identified six categories of fragmentation (...
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...
2015-06-19
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functionsmore » for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.« less
Problems and Solutions in Evaluating Child Outcomes of Large-Scale Educational Programs.
ERIC Educational Resources Information Center
Abrams, Allan S.; And Others
1979-01-01
Evaluation of large-scale programs is problematic because of inherent bias in the assignment of treatment and control groups, resulting in serious regression artifacts even with the use of analysis of covariance designs. Nonuniformity of program implementation across sites and classrooms is also a problem. (Author/GSK)
The Diversity of School Organizational Configurations
ERIC Educational Resources Information Center
Lee, Linda C.
2013-01-01
School reform on a large scale has largely been unsuccessful. Approaches designed to document and understand the variety of organizational conditions that comprise our school systems are needed so that reforms can be tailored and results scaled. Therefore, this article develops a configurational framework that allows a systematic analysis of many…
GrigoraSNPs: Optimized Analysis of SNPs for DNA Forensics.
Ricke, Darrell O; Shcherbina, Anna; Michaleas, Adam; Fremont-Smith, Philip
2018-04-16
High-throughput sequencing (HTS) of single nucleotide polymorphisms (SNPs) enables additional DNA forensic capabilities not attainable using traditional STR panels. However, the inclusion of sets of loci selected for mixture analysis, extended kinship, phenotype, biogeographic ancestry prediction, etc., can result in large panel sizes that are difficult to analyze in a rapid fashion. GrigoraSNPs was developed to address the allele-calling bottleneck that was encountered when analyzing SNP panels with more than 5000 loci using HTS. GrigoraSNPs uses MapReduce-style parallel data processing on multiple computational threads plus a novel locus-identification hashing strategy leveraging target sequence tags. This tool optimizes the SNP-calling module of the DNA analysis pipeline with runtimes that scale linearly with the number of HTS reads. Results are compared with SNP analysis pipelines implemented with SAMtools and GATK. GrigoraSNPs removes a computational bottleneck for processing forensic samples with large HTS SNP panels. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
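The abstract describes the locus-identification strategy only at a high level. The sketch below is a simplified, hypothetical illustration of hashing target sequence tags to call alleles from reads in parallel; the panel tags, tag length, allele-calling rule, and use of a multiprocessing pool are assumptions made for the example and are not GrigoraSNPs code.

```python
from collections import Counter
from multiprocessing import Pool

# Hypothetical panel: locus name -> unique tag sequence immediately upstream of the SNP
PANEL_TAGS = {
    "rs0001": "ACGTACGTAC",
    "rs0002": "TTGACCTGGA",
}
TAG_LEN = 10
TAG_INDEX = {tag: locus for locus, tag in PANEL_TAGS.items()}  # hash lookup by tag

def call_read(read):
    """Map a read to a panel locus by scanning for a known tag, then report the next base."""
    for i in range(len(read) - TAG_LEN):
        locus = TAG_INDEX.get(read[i:i + TAG_LEN])
        if locus is not None:
            return locus, read[i + TAG_LEN]      # allele is the base following the tag
    return None

def tally(reads, workers=4):
    """Count allele observations per locus across reads, processed in parallel."""
    counts = {locus: Counter() for locus in PANEL_TAGS}
    with Pool(workers) as pool:
        for hit in pool.map(call_read, reads):
            if hit:
                locus, base = hit
                counts[locus][base] += 1
    return counts

if __name__ == "__main__":
    reads = ["GGACGTACGTACTA", "CCTTGACCTGGAGT", "ACGTACGTACCA"]
    print(tally(reads))
```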
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
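For readers unfamiliar with the CEM, the following sketch approximates the density matrix as a Chebyshev polynomial series in a rescaled Hamiltonian, avoiding diagonalization. It is a generic, minimal illustration on a toy tight-binding chain, not the authors' implementation; the Gershgorin spectral bounds, expansion order, and Fermi-Dirac smearing are assumptions of the example.

```python
import numpy as np

def chebyshev_density_matrix(h, mu, beta, order=200, emin=None, emax=None):
    """Approximate rho = f(H) (Fermi-Dirac occupation of Hamiltonian H) by a
    Chebyshev expansion of the rescaled Hamiltonian, with no diagonalization."""
    n = h.shape[0]
    if emin is None or emax is None:             # crude spectral bounds (Gershgorin circles)
        radii = np.sum(np.abs(h), axis=1) - np.abs(np.diag(h))
        emin = np.min(np.diag(h) - radii) if emin is None else emin
        emax = np.max(np.diag(h) + radii) if emax is None else emax
    a, b = (emax - emin) / 2.0, (emax + emin) / 2.0
    h_scaled = (h - b * np.eye(n)) / a           # spectrum mapped into [-1, 1]

    def fermi(e):
        return 1.0 / (1.0 + np.exp(beta * (e - mu)))

    # Chebyshev coefficients of the Fermi function on the rescaled interval
    theta = (np.arange(order) + 0.5) * np.pi / order
    f_vals = fermi(a * np.cos(theta) + b)
    coeff = 2.0 / order * np.cos(np.outer(np.arange(order), theta)) @ f_vals
    coeff[0] *= 0.5

    t_prev, t_curr = np.eye(n), h_scaled.copy()  # T0 and T1 of the recurrence
    rho = coeff[0] * t_prev + coeff[1] * t_curr
    for k in range(2, order):
        t_prev, t_curr = t_curr, 2.0 * h_scaled @ t_curr - t_prev
        rho += coeff[k] * t_curr
    return rho

# Tiny tight-binding chain, checked against direct diagonalization
h = -np.eye(6, k=1) - np.eye(6, k=-1)
rho = chebyshev_density_matrix(h, mu=0.0, beta=10.0)
evals, evecs = np.linalg.eigh(h)
rho_exact = evecs @ np.diag(1.0 / (1.0 + np.exp(10.0 * evals))) @ evecs.T
print("max deviation from exact:", np.max(np.abs(rho - rho_exact)))
```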
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrana, Alexandra; Johnson, Matthew C.; Harris, Mary-Jean, E-mail: aterrana@perimeterinstitute.ca, E-mail: mharris8@perimeterinstitute.ca, E-mail: mjohnson@perimeterinstitute.ca
Due to cosmic variance we cannot learn any more about large-scale inhomogeneities from the primary cosmic microwave background (CMB) alone. More information on large scales is essential for resolving large angular scale anomalies in the CMB. Here we consider cross correlating the large-scale kinetic Sunyaev Zel'dovich (kSZ) effect and probes of large-scale structure, a technique known as kSZ tomography. The statistically anisotropic component of the cross correlation encodes the CMB dipole as seen by free electrons throughout the observable Universe, providing information about long wavelength inhomogeneities. We compute the large angular scale power asymmetry, constructing the appropriate transfer functions, and estimate the cosmic variance limited signal to noise for a variety of redshift bin configurations. The signal to noise is significant over a large range of power multipoles and numbers of bins. We present a simple mode counting argument indicating that kSZ tomography can be used to estimate more modes than the primary CMB on comparable scales. A basic forecast indicates that a first detection could be made with next-generation CMB experiments and galaxy surveys. This paper motivates a more systematic investigation of how close to the cosmic variance limit it will be possible to get with future observations.
Acoustic Measurements for Small Solid Rocket Motors
NASA Technical Reports Server (NTRS)
Vargas, Magda B.; Kenny, R. Jeremy
2010-01-01
Models have been developed to predict large solid rocket motor acoustic loads based on the scaling of small solid rocket motors. MSFC has measured several small solid rocket motors in horizontal and launch configurations to anchor these models. The Solid Rocket Test Motor (SRTM) has ballistics similar to the Reusable Solid Rocket Motor (RSRM) and is therefore a good choice for acoustic scaling. Acoustic measurements were collected during the test firings of the Insulation Configuration Extended Length (ICXL) 7, 6, and 8 motors (in firing order) in order to compare with RSRM horizontal firing data. The scope of this presentation includes the acoustic test procedures and instrumentation implemented during the three SRTM firings, and the data analysis method and general trends observed in the data.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
NASA Astrophysics Data System (ADS)
Tang, Zhanqi; Jiang, Nan
2018-05-01
This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were carried out at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. It was found that the large-scale fluctuations exert a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions near the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was noted for small-scale frequency modulation; however, the dissipative-scale frequency modulation seems to be independent of the cylindrical perturbation. The phase-relationship observation indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The integral time scale dependence of the phase relationship between the small/dissipative scales and the large scales was also discussed. Furthermore, the discrepancy between small- and dissipative-scale time shifts relative to the large-scale motions was examined, which indicates that the small-scale amplitude/frequency leads the dissipative scales.
ERIC Educational Resources Information Center
Cizek, Gregory J.
2009-01-01
Reliability and validity are two characteristics that must be considered whenever information about student achievement is collected. However, those characteristics--and the methods for evaluating them--differ in large-scale testing and classroom testing contexts. This article presents the distinctions between reliability and validity in the two…
Topology Analysis of the Sloan Digital Sky Survey. I. Scale and Luminosity Dependence
NASA Astrophysics Data System (ADS)
Park, Changbom; Choi, Yun-Young; Vogeley, Michael S.; Gott, J. Richard, III; Kim, Juhan; Hikage, Chiaki; Matsubara, Takahiko; Park, Myeong-Gu; Suto, Yasushi; Weinberg, David H.; SDSS Collaboration
2005-11-01
We measure the topology of volume-limited galaxy samples selected from a parent sample of 314,050 galaxies in the Sloan Digital Sky Survey (SDSS), which is now complete enough to describe the fully three-dimensional topology and its dependence on galaxy properties. We compare the observed genus statistic G(ν_f) to predictions for a Gaussian random field and to the genus measured for mock surveys constructed from new large-volume simulations of the ΛCDM cosmology. In this analysis we carefully examine the dependence of the observed genus statistic on the Gaussian smoothing scale R_G from 3.5 to 11 h^-1 Mpc and on the luminosity of galaxies over the range -22.50
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Subramanian, Kandaswamy
2013-02-01
The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M_{1,c} = (k_1/k_2)^2, where k_1 and k_2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos based on the otherwise rapid turbulent diffusion of large-scale fields require re-thinking since only the large-scale non-helical field is so diffused in a closed system. Boundary terms could however provide potential pathways for rapid change of the large-scale helical field. (2) Since M_{1,c} ≪ 1 for k_1 ≪ k_2, the presence of long-lived ordered large-scale helical fields as in extragalactic jets do not guarantee that the magnetic field dominates the kinetic energy.
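The critical ratio quoted in the abstract can be written out explicitly. The energy symbols below (for the helical large-scale magnetic energy and the turbulent kinetic energy) are notation introduced here for clarity and do not appear in the abstract.

```latex
% Critical ratio from the abstract: k_1 and k_2 are the wavenumbers of the
% large and forcing scales; E^hel_{M,1} and E_kin are shorthand introduced here.
\[
  M_{1,c} = \left(\frac{k_1}{k_2}\right)^{2}, \qquad
  \frac{E^{\mathrm{hel}}_{M,1}}{E_{\mathrm{kin}}} < M_{1,c}
  \;\Rightarrow\; \text{fast (turbulent) decay}, \qquad
  \frac{E^{\mathrm{hel}}_{M,1}}{E_{\mathrm{kin}}} > M_{1,c}
  \;\Rightarrow\; \text{slow, resistively limited decay}.
\]
```

Since k_1 ≪ k_2 implies M_{1,c} ≪ 1, even an energetically weak helical field can persist against turbulent diffusion, which is the point made at the end of the abstract.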
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of a large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABMs and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
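The abstract does not give the SRA update rules, so the sketch below only illustrates the structural idea of processing postal-code subregions independently in parallel. The infection dynamics, region sizes, and parameters are placeholders, and cross-region interactions (which a real SRA would have to reconcile) are omitted.

```python
from multiprocessing import Pool
import random

def simulate_subregion(args):
    """Advance one postal-code subregion's agents for a number of steps.
    A stand-in for the epidemic update rule, which the abstract does not specify."""
    region_id, n_agents, n_steps, seed = args
    rng = random.Random(seed)
    infected = {0}                                        # seed one infected agent per region
    for _ in range(n_steps):
        contacts = {rng.randrange(n_agents) for _ in range(len(infected) * 3)}
        infected |= {a for a in contacts if rng.random() < 0.05}
    return region_id, len(infected)

def sliding_region_run(regions, n_steps=100, workers=4):
    """Sliding-region style run: subregions are simulated independently in parallel
    rather than advancing the whole population at once."""
    jobs = [(rid, size, n_steps, rid) for rid, size in regions]
    with Pool(workers) as pool:
        return dict(pool.map(simulate_subregion, jobs))

if __name__ == "__main__":
    # Hypothetical postal-code subregions: (region id, population size)
    regions = [(0, 5000), (1, 8000), (2, 3000), (3, 12000)]
    print(sliding_region_run(regions))
```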
Large-scale coupling dynamics of instructed reversal learning.
Mohr, Holger; Wolfensteller, Uta; Ruge, Hannes
2018-02-15
The ability to rapidly learn from others by instruction is an important characteristic of human cognition. A recent study found that the rapid transfer from initial instructions to fluid behavior is supported by changes of functional connectivity between and within several large-scale brain networks, and particularly by the coupling of the dorsal attention network (DAN) with the cingulo-opercular network (CON). In the present study, we extended this approach to investigate how these brain networks interact when stimulus-response mappings are altered by novel instructions. We hypothesized that residual stimulus-response associations from initial practice might negatively impact the ability to implement novel instructions. Using functional imaging and large-scale connectivity analysis, we found that functional coupling between the CON and DAN was generally at a higher level during initial than reversal learning. Examining the learning-related connectivity dynamics between the CON and DAN in more detail by means of multivariate pattern analyses, we identified a specific subset of connections which showed a particularly high increase in connectivity during initial learning compared to reversal learning. This finding suggests that the CON-DAN connections can be separated into two functionally dissociable yet spatially intertwined subsystems supporting different aspects of short-term task automatization. Copyright © 2017 Elsevier Inc. All rights reserved.
Infraslow Electroencephalographic and Dynamic Resting State Network Activity.
Grooms, Joshua K; Thompson, Garth J; Pan, Wen-Ju; Billings, Jacob; Schumacher, Eric H; Epstein, Charles M; Keilholz, Shella D
2017-06-01
A number of studies have linked the blood oxygenation level dependent (BOLD) signal to electroencephalographic (EEG) signals in traditional frequency bands (δ, θ, α, β, and γ), but the relationship between BOLD and its direct frequency correlates in the infraslow band (<1 Hz) has been little studied. Previously, work in rodents showed that infraslow local field potentials play a role in functional connectivity, particularly in the dynamic organization of large-scale networks. To examine the relationship between infraslow activity and network dynamics in humans, direct current (DC) EEG and resting state magnetic resonance imaging data were acquired simultaneously. The DC EEG signals were correlated with the BOLD signal in patterns that resembled resting state networks. Subsequent dynamic analysis showed that the correlation between DC EEG and the BOLD signal varied substantially over time, even within individual subjects. The variation in DC EEG appears to reflect the time-varying contribution of different resting state networks. Furthermore, some of the patterns of DC EEG and BOLD correlation are consistent with previous work demonstrating quasiperiodic spatiotemporal patterns of large-scale network activity in resting state. These findings demonstrate that infraslow electrical activity is linked to BOLD fluctuations in humans and that it may provide a basis for large-scale organization comparable to that observed in animal studies.
Infraslow Electroencephalographic and Dynamic Resting State Network Activity
Grooms, Joshua K.; Thompson, Garth J.; Pan, Wen-Ju; Billings, Jacob; Schumacher, Eric H.; Epstein, Charles M.
2017-01-01
Abstract A number of studies have linked the blood oxygenation level dependent (BOLD) signal to electroencephalographic (EEG) signals in traditional frequency bands (δ, θ, α, β, and γ), but the relationship between BOLD and its direct frequency correlates in the infraslow band (<1 Hz) has been little studied. Previously, work in rodents showed that infraslow local field potentials play a role in functional connectivity, particularly in the dynamic organization of large-scale networks. To examine the relationship between infraslow activity and network dynamics in humans, direct current (DC) EEG and resting state magnetic resonance imaging data were acquired simultaneously. The DC EEG signals were correlated with the BOLD signal in patterns that resembled resting state networks. Subsequent dynamic analysis showed that the correlation between DC EEG and the BOLD signal varied substantially over time, even within individual subjects. The variation in DC EEG appears to reflect the time-varying contribution of different resting state networks. Furthermore, some of the patterns of DC EEG and BOLD correlation are consistent with previous work demonstrating quasiperiodic spatiotemporal patterns of large-scale network activity in resting state. These findings demonstrate that infraslow electrical activity is linked to BOLD fluctuations in humans and that it may provide a basis for large-scale organization comparable to that observed in animal studies. PMID:28462586
A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes
2013-01-01
Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image conditions, objects usually have large ambiguity with backgrounds. In addition, there is a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and backgrounds can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
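As a rough, hypothetical illustration of the latent-category idea (not the authors' LCL algorithm), the sketch below applies latent semantic analysis to bag-of-visual-words histograms and scores each latent dimension by how well it separates a positive image set from a negative one. The synthetic data, the number of latent components, and the separation score are all invented for the example.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Hypothetical bag-of-visual-words histograms for weakly labelled images
# (rows: images of one class, columns: visual-word counts).
rng = np.random.default_rng(0)
positives = rng.poisson(2.0, size=(200, 500)).astype(float)
negatives = rng.poisson(1.5, size=(200, 500)).astype(float)

# Latent semantic analysis: project histograms onto a few latent categories,
# which may correspond to the object, its parts, or recurring backgrounds (e.g. sky).
lsa = TruncatedSVD(n_components=5, random_state=0)
latent_pos = lsa.fit_transform(positives)
latent_neg = lsa.transform(negatives)

# Category selection: score each latent category by how well it separates
# positive from negative images, and keep the most discriminative one.
scores = np.abs(latent_pos.mean(axis=0) - latent_neg.mean(axis=0)) / (
    latent_pos.std(axis=0) + latent_neg.std(axis=0) + 1e-9)
best = int(np.argmax(scores))
print(f"most discriminative latent category: {best} (score {scores[best]:.2f})")
```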
Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan
2017-03-15
The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to exponential demands on calculation needed for comparative genomics on a large scale. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database also can be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation. Copyright © 2017 Elsevier Inc. All rights reserved.
Motion of an elastic capsule in a square microfluidic channel.
Kuriakose, S; Dimitrakopoulos, P
2011-07-01
In the present study we investigate computationally the steady-state motion of an elastic capsule along the centerline of a square microfluidic channel and compare it with that in a cylindrical tube. In particular, we consider a slightly over-inflated elastic capsule made of a strain-hardening membrane with comparable shearing and area-dilatation resistance. Under the conditions studied in this paper (i.e., small, moderate, and large capsules at low and moderate flow rates), the capsule motion in a square channel is similar to, and thus governed by the same scaling laws as, the capsule motion in a cylindrical tube, even though in the channel the cross section in the upstream portion of large capsules is nonaxisymmetric (i.e., square-like with rounded corners). When the hydrodynamic forces on the membrane increase, the capsule develops a pointed downstream edge and a flattened rear (possibly with a negative curvature) so that the restoring tension forces are increased, as also happens with droplets. Membrane tensions increase significantly with the capsule size, while the area near the downstream tip is the most likely to rupture when a capsule flows in a microchannel. Because the membrane tensions increase with the interfacial deformation, a suitable Landau-Levich-Derjaguin-Bretherton analysis reveals that the lubrication film thickness h for large capsules depends on both the capillary number Ca and the capsule size a; our computations determine the latter dependence to be (in dimensionless form) h ~ a^(-2) for the large capsules studied in this work. For small and moderate capsule sizes a, the capsule velocity U_x and additional pressure drop ΔP^+ are governed by the same scaling laws as for high-viscosity droplets. The velocity and additional pressure drop of large thick capsules also follow the dynamics of high-viscosity droplets, and are affected by the lubrication film thickness. The motion of our large thick capsules is characterized by a U_x - U ~ h ~ a^(-2) approach to the undisturbed average duct velocity and an additional pressure drop ΔP^+ ~ a^3/h ~ a^5. By combining basic physical principles and geometric properties, we develop a theoretical analysis that explains the power laws we found for large capsules.
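The scaling relations reported for large, thick capsules can be collected in one place; the notation follows the abstract.

```latex
% Scaling relations quoted in the abstract for large, thick capsules
% (h: dimensionless lubrication-film thickness, a: capsule size,
%  U_x: capsule velocity, U: undisturbed average duct velocity,
%  \Delta P^{+}: additional pressure drop).
\begin{align*}
  h &\sim a^{-2}, \\
  U_x - U &\sim h \sim a^{-2}, \\
  \Delta P^{+} &\sim a^{3}/h \sim a^{5}.
\end{align*}
```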
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Oborn, Ingrid; Modin-Edman, Anna-Karin; Bengtsson, Helena; Gustafson, Gunnela M; Salomon, Eva; Nilsson, S Ingvar; Holmqvist, Johan; Jonsson, Simon; Sverdrup, Harald
2005-06-01
A systems analysis approach was used to assess farm-scale nutrient and trace element sustainability by combining full-scale field experiments with specific studies of nutrient release from mineral weathering and trace-element cycling. At the Ojebyn dairy farm in northern Sweden, a farm-scale case study including phosphorus (P), potassium (K), and zinc (Zn) was run to compare organic and conventional agricultural management practices. By combining different element-balance approaches (at farmgate, barn, and field scales) and further adapting these to the FARMFLOW model, we were able to combine mass flows and pools within the subsystems and establish links between subsystems in order to make farm-scale predictions. It was found that internal element flows on the farm are large and that there are farm-internal sources (Zn) and loss terms (K). The approaches developed and tested at the Ojebyn farm are promising and considered generally adaptable to any farm.
The cross-correlation between 3D cosmic shear and the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Zieser, Britta; Merkel, Philipp M.
2016-06-01
We present the first calculation of the cross-correlation between 3D cosmic shear and the integrated Sachs-Wolfe (iSW) effect. Both signals are combined in a single formalism, which permits the computation of the full covariance matrix. In order to avoid the uncertainties presented by the non-linear evolution of the matter power spectrum and intrinsic alignments of galaxies, our analysis is restricted to large scales, i.e. multipoles below ℓ = 1000. We demonstrate in a Fisher analysis that this reduction compared to other studies of 3D weak lensing extending to smaller scales is compensated by the information that is gained if the additional iSW signal and in particular its cross-correlation with lensing data are considered. Given the observational standards of upcoming weak-lensing surveys like Euclid, marginal errors on cosmological parameters decrease by 10 per cent compared to a cosmic shear experiment if both types of information are combined without a cosmic microwave background (CMB) prior. Once the constraining power of CMB data is added, the improvement becomes marginal.
Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette
2013-06-01
High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
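A minimal sketch of the kind of automated file handling the paper argues for, assuming per-compound Excel files with a 'results' sheet and columns named compound, concentration, and response; these names, the directory layout, and the use of pandas are assumptions for illustration, not the ACuteTox format or the authors' programme.

```python
from pathlib import Path
import pandas as pd

def collect_concentration_response(data_dir, sheet="results"):
    """Walk a directory of per-compound Excel files and stack them into one
    standardised table of concentration-response records."""
    frames = []
    for path in sorted(Path(data_dir).glob("*.xlsx")):
        df = pd.read_excel(path, sheet_name=sheet)
        df = df.rename(columns=str.lower)[["compound", "concentration", "response"]]
        df["source_file"] = path.name            # keep provenance for later auditing
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# Hypothetical usage, once the raw Excel files exist in a common directory:
# tidy = collect_concentration_response("raw_excel_files/")
# tidy.to_csv("all_concentration_response.csv", index=False)
```

A standardised table of this kind is what makes downstream automated concentration-response analysis scripts possible, which is the point the abstract makes.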
Spa therapy and balneotherapy for treating low back pain: meta-analysis of randomized trials.
Pittler, M H; Karagülle, M Z; Karagülle, M; Ernst, E
2006-07-01
Low back pain is a major public health concern and complementary treatments are frequently used for this condition. The objective of this systematic review and meta-analysis was to assess the evidence for or against the effectiveness of spa therapy and balneotherapy for treating low back pain. Systematic searches were conducted on Medline, Embase, Amed, Cochrane Central, the UK National Research Register and ClinicalTrials.gov (all until July 2005). Hand searches were performed and experts contacted. Methodological quality was assessed using a standard scale. Five randomized clinical trials met all inclusion criteria. Quantitative data synthesis was performed. The data for spa therapy, assessed on a 100 mm visual analogue scale (VAS), suggest significant beneficial effects compared with waiting list control groups (weighted mean difference 26.6 mm, 95% confidence interval 20.4-32.8, n=442) for patients with chronic low back pain. For balneotherapy the data, assessed on a 100 mm VAS, also suggest beneficial effects compared with control groups (weighted mean difference 18.8 mm, 95% confidence interval 10.3-27.3, n=138). Even though the data are scarce, there is encouraging evidence suggesting that spa therapy and balneotherapy may be effective for treating patients with low back pain. These data are not compelling but warrant rigorous large-scale trials.
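The fixed-effect (inverse-variance) pooling of weighted mean differences used in such meta-analyses can be sketched as follows. The per-trial numbers are hypothetical and chosen only to resemble the 100 mm VAS scale in the abstract; they are not the trial data.

```python
import numpy as np

def fixed_effect_wmd(mean_diff, se):
    """Inverse-variance fixed-effect pooling of weighted mean differences.
    Returns the pooled estimate and its 95% confidence interval."""
    mean_diff, se = np.asarray(mean_diff, float), np.asarray(se, float)
    w = 1.0 / se ** 2                               # inverse-variance weights
    pooled = np.sum(w * mean_diff) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-trial differences on a 100 mm VAS (mm) with their standard errors
wmd, ci = fixed_effect_wmd([25.0, 30.0, 22.0], [4.0, 6.0, 5.0])
print(f"pooled WMD = {wmd:.1f} mm, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")
```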
Magnetic Rayleigh-Taylor instability in radiative flows
NASA Astrophysics Data System (ADS)
Yaghoobi, Asiyeh; Shadmehri, Mohsen
2018-06-01
We present a linear analysis of the radiative Rayleigh-Taylor (RT) instability in the presence of a magnetic field for both the optically thin and thick regimes. When the flow is optically thin, the magnetic field not only stabilizes perturbations with short wavelengths, but also reduces the growth rate of the instability at long wavelengths compared to the non-magnetized case. We then extend our analysis to optically thick flows with a conserved total specific entropy, and the properties of the unstable perturbations are investigated in detail. The growth rate of the instability at short wavelengths is suppressed by the presence of the magnetic field; however, the growth rate is nearly constant at long wavelengths because of the radiation field. Since the radiative bubbles around massive protostars are subject to the RT instability, we also explore the implications of our results in this context. In the non-magnetized case, the growth time-scale of the instability for a typical bubble is found to be less than 1000 yr, which is very short compared to the typical star formation time-scale. A magnetic field of reasonable strength significantly increases the growth time-scale to more than hundreds of thousands of years. The instability, furthermore, is more efficient at large wavelengths, whereas in the non-magnetized case the growth rate at short wavelengths is more significant.
Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method
NASA Astrophysics Data System (ADS)
Yang, Zailin; Wang, Yao; Hei, Baoping
2013-12-01
The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape function. To investigate the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for the discretization tests of material parameters with two fictitious distributions, and a numerical example is solved to observe the differences in the results between the two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
Mid-latitude response to geomagnetic storms observed in 630nm airglow over continental United States
NASA Astrophysics Data System (ADS)
Bhatt, A.; Kendall, E. A.
2016-12-01
We present an analysis of the mid-latitude response to geomagnetic storms observed with the MANGO network, which consists of all-sky cameras imaging the 630nm emission over the continental United States. The response largely falls into two categories: Stable Auroral Red (SAR) arcs and large-scale traveling ionospheric disturbances (LSTIDs). However, outside of these phenomena, less frequently observed responses include anomalous airglow brightening, bright swirls, and frozen-in traveling structures. We will present an analysis of various events observed over 3 years of MANGO network operation, which started with two imagers in the western US, with new imagers added in the last year. We will also present unusual north- and northeastward-propagating waves often observed in conjunction with diffuse aurora. Wherever possible, we will compare with observations from Boston University imagers located in Massachusetts and Texas.
Cloud computing for genomic data analysis and collaboration.
Langmead, Ben; Nellore, Abhinav
2018-04-01
Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
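A minimal sketch of PCA applied to an MD trajectory, assuming the frames have already been aligned to a reference structure. The toy trajectory is synthetic, and the number of retained modes is arbitrary; this is an illustration of the dimensionality-reduction step, not the dissertation's full reduced-basis simulation method.

```python
import numpy as np

def trajectory_pca(coords, n_modes=10):
    """PCA of an MD trajectory.
    coords: array of shape (n_frames, 3 * n_atoms), already aligned to a reference.
    Returns the mean structure, the leading principal modes and their variances."""
    mean = coords.mean(axis=0)
    centered = coords - mean
    cov = centered.T @ centered / (coords.shape[0] - 1)
    variances, modes = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(variances)[::-1][:n_modes]
    return mean, modes[:, order], variances[order]

def project(coords, mean, modes):
    """Reduced coordinates: projection of each frame onto the retained modes."""
    return (coords - mean) @ modes

# Hypothetical toy trajectory: 500 frames of a 30-atom system with a few dominant directions
rng = np.random.default_rng(1)
traj = rng.normal(size=(500, 90)) * np.linspace(2.0, 0.1, 90)
mean, modes, var = trajectory_pca(traj, n_modes=5)
print("fraction of variance captured:", var.sum() / traj.var(axis=0, ddof=1).sum())
```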
Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*
Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.
2015-01-01
Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by running over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363
Testing gravity using large-scale redshift-space distortions
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.
2013-11-01
We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP). In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to r_max = 120 h^-1 Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1 × 10^-4, and, for the nDGP model, a cross-over scale r_c ≥ 340 Mpc, at 95 per cent confidence level.
Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.
Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L
2015-02-01
Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by running over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng
2015-01-01
The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers, and the detection of the underlying genetic factors, is important not only for oilseed producers around the world but also for the other crops grown in the rotation system in China. In previous studies, the low density and mixed types of the markers used obstructed genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a double-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments in three geographic regions via eight existing genomic prediction models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed a large-scale mapping of FT-related loci across the three regions and found 437 associated SNPs, some of which represented known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed that significant interactions exist between detected loci, even among some known FT-related genes. All the results show that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.
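As an illustration of whole-genome regression for a trait like flowering time (not necessarily one of the eight models evaluated in the study, which are not named in the abstract), the sketch below runs cross-validated ridge regression on a synthetic genotype matrix. The marker coding, sample sizes, simulated effects, and penalty are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical genotype matrix (lines x SNPs, coded 0/1/2) and flowering-time phenotype
rng = np.random.default_rng(2)
genotypes = rng.integers(0, 3, size=(180, 2000)).astype(float)
effects = rng.normal(0.0, 0.05, size=2000)                   # simulated SNP effects
flowering_time = genotypes @ effects + rng.normal(0.0, 1.0, size=180)

# Ridge regression (a GBLUP-like whole-genome regression) as one candidate
# genomic prediction model, with 5-fold cross-validated accuracy
model = Ridge(alpha=100.0)
accuracy = cross_val_score(model, genotypes, flowering_time, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {accuracy.mean():.2f}")
```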
Zhang, Qiang; Ge, Heng’an; Zhou, Jiaojiao; Cheng, Biao
2013-01-01
Background Platelet-rich products (PRP) are widely used for rotator cuff tears. However, whether platelet-rich products produce superior clinical or radiological outcomes is controversial. This study aims to use meta-analysis to compare clinical and radiological outcomes between groups treated with and without platelet-rich products. Methods The Pubmed, Embase, and Cochrane library databases were searched for relevant studies published before April 20, 2013. Studies were included only if they clearly compared outcomes with and without the use of platelet-rich products. The Constant, ASES, UCLA, and SST scale systems and the rotator cuff retear rate were evaluated. The weighted mean differences and relative risks were calculated using a fixed-effects model. Results Seven studies were included in this meta-analysis. No significant differences were found for the Constant scale (0.73, 95% CI, −1.82 to 3.27, P = 0.58), ASES scale (−2.89, 95% CI, −6.31 to 0.53, P = 0.1), UCLA scale (−0.79, 95% CI, −2.20 to 0.63, P = 0.28), SST scale (0.34, 95% CI, −0.01 to 0.69, P = 0.05), and the overall rotator cuff retear rate (0.71, 95% CI, 0.48 to 1.05, P = 0.08). Subgroup analysis according to the initial tear size showed a lower retear rate in small- and medium-sized tears (0.33, 95% CI, 0.12 to 0.91, P = 0.03) after platelet-rich product application but no difference for large- and massive-sized tears (0.86, 95% CI, 0.60 to 1.23, P = 0.42). Conclusion In conclusion, the meta-analysis suggests that platelet-rich products have no benefit on the overall clinical outcomes and retear rate for the arthroscopic repair of full-thickness rotator cuff tears. However, a decrease occurred in the rate of retears among patients treated with PRP for small- and medium-sized rotator cuff tears but not for large- and massive-sized tears. Level of Evidence Level II PMID:23874991
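The pooled estimates quoted above are of the inverse-variance, fixed-effect kind; the sketch below shows how such a pooled mean difference and confidence interval are computed. The per-study effects and standard errors are made-up numbers, not the review's data.

    # Fixed-effect (inverse-variance) pooling of mean differences (hypothetical inputs).
    import numpy as np

    effects = np.array([0.9, -1.2, 0.4, 1.5, -0.3])   # per-study mean differences
    se      = np.array([1.1,  0.9, 1.4, 1.2,  0.8])   # per-study standard errors

    w = 1.0 / se**2                                    # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled difference = {pooled:.2f}, 95% CI = ({low:.2f}, {high:.2f})")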
Large-Scale medical image analytics: Recent methodologies, applications and Future directions.
Zhang, Shaoting; Metaxas, Dimitris
2016-10-01
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate significantly increasing the scale at which image retrieval systems operate, so that interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Ngan, Henry Y. T.; Yung, Nelson H. C.; Yeh, Anthony G. O.
2015-02-01
This paper presents a comparative study of outlier detection (OD) for large-scale traffic data. Traffic data nowadays are massive in scale and are collected every second throughout any modern city. In this research, the traffic flow dynamics are collected from one of the busiest four-armed junctions in Hong Kong over a 31-day sampling period (with 764,027 vehicles in total). The traffic flow dynamics are expressed as high-dimensional spatial-temporal (ST) signals (i.e. 80 cycles), which show a high degree of similarity within a signal and across different signals in the same direction. A total of 19 traffic directions are identified at this junction, yielding 874 ST signals over the 31-day period. To reduce dimensionality, the ST signals first undergo principal component analysis (PCA) and are represented as (x,y)-coordinates. These PCA (x,y)-coordinates are then assumed to be Gaussian distributed, and under this assumption the data points are evaluated by (a) a correlation study with three variant coefficients, (b) a one-class support vector machine (SVM) and (c) kernel density estimation (KDE). The correlation study does not give any explicit OD result, while the one-class SVM and KDE achieve average detection success rates (DSRs) of 59.61% and 95.20%, respectively.
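A minimal sketch of the PCA, one-class SVM and KDE steps described above is given below, applied to synthetic signals rather than the Hong Kong junction data; the SVM nu, KDE bandwidth and the 1% density cut-off are illustrative assumptions, not the study's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import OneClassSVM
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(1)
    signals = rng.normal(0, 1, size=(874, 80))         # 874 ST signals, 80 cycles each
    signals[:10] += 4.0                                 # plant a few anomalous signals

    xy = PCA(n_components=2).fit_transform(signals)     # project to (x, y)-coordinates

    svm_flags = OneClassSVM(nu=0.05, gamma="scale").fit_predict(xy) == -1

    kde = KernelDensity(bandwidth=0.5).fit(xy)
    log_dens = kde.score_samples(xy)
    kde_flags = log_dens < np.quantile(log_dens, 0.01)  # lowest-density 1% flagged as outliers

    print("SVM outliers:", int(svm_flags.sum()), "KDE outliers:", int(kde_flags.sum()))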
Validating Large Scale Networks Using Temporary Local Scale Networks
USDA-ARS?s Scientific Manuscript database
The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...
NASA Astrophysics Data System (ADS)
Moya, José; Grandjean, Gilles; Copons, Ramon; Vaunat, Jean; Buxó, Pere; Colas, Bastien; Darrozes, José; Gasc, Muriel; Guinau, Marta; Gutiérrez, Francisco; García, Juan Carlos; Virely, Didier; Crosetto, Michele; Mas, Raül
2017-04-01
Large landslides are recognised as one of the main erosional agents in mountain ranges, having a significant influence on landscape evolution. However, few efforts have been made to assess their geomorphological impact from a regional perspective. Regional-scale investigations are also necessary for the reliable evaluation of the associated risks (i.e. for land-use planning). Large landslides are common in the Pyrenees but: 1) their geographic distribution on a regional scale is not well known; 2) their geological and geomorphological controlling factors have been studied only preliminarily; and 3) their state of activity and stability conditions are unknown in most cases. Regional analyses of large landslides, such as those carried out by Crosta et al. (2013) in the Alps, are rare worldwide. Jarman et al. (2014) conducted a very preliminary analysis in a sector of the Pyrenees. The construction of a cartographic inventory constitutes the basis for this type of study, which is typically hindered by the lack of cross-border landslide databases and methodologies. The aim of this contribution is to present the preliminary work carried out to construct a harmonized inventory of large landslides in the Pyrenees, involving for the first time both sides of the cordillera and the main groups working on landslide risk in France, Spain and Andorra. Methods used for landslide hazard and risk analysis have been compiled and compared, showing significant divergence, even as regards terminology. A preliminary cross-border inventory sheet on the risk of large landslides has been prepared. It includes specific fields for the assessment of landslide activity (using complementary methods such as morpho-stratigraphy, morphometric analysis and remote techniques) and indirect potential costs (which typically exceed direct ones), which are usually neglected in existing databases. Crosta, G.B., Frattini, P. and Agliardi, F., 2013. Deep seated gravitational slope deformations in the European Alps. Tectonophysics, 605, 13-33. Jarman, D., Calvet, M., Corominas, J., Delmas, M. and Gunnell, Y., 2014. Large-Scale Rock Slope Failures in the Eastern Pyrenees: Identifying a sparse but significant population in paraglacial and parafluvial contexts. Geografiska Annaler (Series A, Physical Geography), 96: 357-391. Acknowledgement This work has been done by the PyrMove research network. This network acknowledges the scholarship granted by the Governments of Andorra, Catalonia and Occitanie in the framework of the research grants of the Working Community of the Pyrenees ACTP024-AND/2014, AGAUR 2014CTP00051 and Occitanie (ex. Midi-Pyrénées Region) deliberation n°14/12/12.05.
Hydrogen Safety Issues Compared to Safety Issues with Methane and Propane
NASA Astrophysics Data System (ADS)
Green, M. A.
2006-04-01
The hydrogen economy is not possible if the safety standards currently applied to liquid hydrogen and hydrogen gas by many laboratories are applied to devices that use either liquid or gaseous hydrogen. Methane and propane are commonly used by ordinary people without special training. This report asks, "How is hydrogen different from flammable gases that are commonly used all over the world?" This report compares the properties of hydrogen, methane and propane and how these properties may relate to safety when they are used in both the liquid and gaseous state. Through such an analysis, sensible safety standards for the large-scale (or even small-scale) use of liquid and gaseous hydrogen systems can be developed. This paper is meant to promote discussion of issues related to hydrogen safety so that engineers designing equipment can factor sensible safety standards into their designs.
Hydrogen Safety Issues Compared to Safety Issues with Methane and Propane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Michael A.
The hydrogen economy is not possible if the safety standards currently applied to liquid hydrogen and hydrogen gas by many laboratories are applied to devices that use either liquid or gaseous hydrogen. Methane and propane are commonly used by ordinary people without special training. This report asks, 'How is hydrogen different from flammable gases that are commonly used all over the world?' This report compares the properties of hydrogen, methane and propane and how these properties may relate to safety when they are used in both the liquid and gaseous state. Through such an analysis, sensible safety standards for the large-scale (or even small-scale) use of liquid and gaseous hydrogen systems can be developed. This paper is meant to promote discussion of issues related to hydrogen safety so that engineers designing equipment can factor sensible safety standards into their designs.
Analyzing the Validity of Relationship Banking through Agent-based Modeling
NASA Astrophysics Data System (ADS)
Nishikido, Yukihito; Takahashi, Hiroshi
This article analyzes the validity of relationship banking through agent-based modeling. In the analysis, we especially focus on the relationship between economic conditions and both lenders' and borrowers' behaviors. As a result of intensive experiments, we made the following interesting findings: (1) relationship banking contributes to reducing bad loans; (2) relationship banking is more effective in enhancing market growth than transaction banking when borrowers' sales scale is large; (3) keener competition among lenders may bring inefficiency to the market.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
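The advantage of analytic derivatives mentioned above can be seen even on a toy response: the sketch below compares the exact derivative of a simple axial stress with forward-difference estimates at several step sizes. The load and section dimensions are assumed values, unrelated to the original system.

    # Analytic vs finite-difference sensitivity of sigma = P / (w * t) w.r.t. thickness t.
    def stress(P, w, t):
        return P / (w * t)

    def dstress_dt_analytic(P, w, t):
        return -P / (w * t**2)          # exact derivative

    def dstress_dt_fd(P, w, t, h):
        return (stress(P, w, t + h) - stress(P, w, t)) / h   # forward difference

    P, w, t = 1000.0, 0.05, 0.01        # assumed load [N] and section dimensions [m]
    exact = dstress_dt_analytic(P, w, t)
    for h in (1e-3, 1e-5, 1e-8):
        approx = dstress_dt_fd(P, w, t, h)
        print(f"h={h:.0e}  FD={approx:.6e}  relative error={(approx - exact) / exact:.2e}")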
Serum Biochemical Phenotypes in the Domestic Dog
Chang, Yu-Mei; Hadox, Erin; Szladovits, Balazs; Garden, Oliver A.
2016-01-01
The serum or plasma biochemical profile is essential in the diagnosis and monitoring of systemic disease in veterinary medicine, but current reference intervals typically take no account of breed-specific differences. Breed-specific hematological phenotypes have been documented in the domestic dog, but little has been published on serum biochemical phenotypes in this species. Serum biochemical profiles of dogs in which all measurements fell within the existing reference intervals were retrieved from a large veterinary database. Serum biochemical profiles from 3045 dogs were retrieved, of which 1495 had an accompanying normal glucose concentration. Sixty pure breeds plus a mixed breed control group were represented by at least 10 individuals. All analytes, except for sodium, chloride and glucose, showed variation with age. Total protein, globulin, potassium, chloride, creatinine, cholesterol, total bilirubin, ALT, CK, amylase, and lipase varied between sexes. Neutering status significantly impacted all analytes except albumin, sodium, calcium, urea, and glucose. Principal component analysis of serum biochemical data revealed 36 pure breeds with distinctive phenotypes. Furthermore, comparative analysis identified 23 breeds with significant differences from the mixed breed group in all biochemical analytes except urea and glucose. Eighteen breeds were identified by both principal component and comparative analysis. Tentative reference intervals were generated for breeds with a distinctive phenotype identified by comparative analysis and represented by at least 120 individuals. This is the first large-scale analysis of breed-specific serum biochemical phenotypes in the domestic dog and highlights potential genetic components of biochemical traits in this species. PMID:26919479
NASA Astrophysics Data System (ADS)
Federrath, C.; Roman-Duval, J.; Klessen, R. S.; Schmidt, W.; Mac Low, M.-M.
2010-03-01
Context. Density and velocity fluctuations on virtually all scales observed with modern telescopes show that molecular clouds (MCs) are turbulent. The forcing and structural characteristics of this turbulence are, however, still poorly understood. Aims: To shed light on this subject, we study two limiting cases of turbulence forcing in numerical experiments: solenoidal (divergence-free) forcing and compressive (curl-free) forcing, and compare our results to observations. Methods: We solve the equations of hydrodynamics on grids with up to 1024³ cells for purely solenoidal and purely compressive forcing. Eleven lower-resolution models with different forcing mixtures are also analysed. Results: Using Fourier spectra and Δ-variance, we find velocity dispersion-size relations consistent with observations and independent numerical simulations, irrespective of the type of forcing. However, compressive forcing yields stronger compression at the same rms Mach number than solenoidal forcing, resulting in a three times larger standard deviation of volumetric and column density probability distributions (PDFs). We compare our results to different characterisations of several observed regions, and find evidence of different forcing functions. Column density PDFs in the Perseus MC suggest the presence of a mainly compressive forcing agent within a shell, driven by a massive star. Although the PDFs are close to log-normal, they have non-Gaussian skewness and kurtosis caused by intermittency. Centroid velocity increments measured in the Polaris Flare on intermediate scales agree with solenoidal forcing on that scale. However, Δ-variance analysis of the column density in the Polaris Flare suggests that turbulence is driven on large scales, with a significant compressive component on the forcing scale. This indicates that, although likely driven with mostly compressive modes on large scales, turbulence can behave like solenoidal turbulence on smaller scales. Principal component analysis of G216-2.5 and most of the Rosette MC agree with solenoidal forcing, but the interior of an ionised shell within the Rosette MC displays clear signatures of compressive forcing. Conclusions: The strong dependence of the density PDF on the type of forcing must be taken into account in any theory using the PDF to predict properties of star formation. We supply a quantitative description of this dependence. We find that different observed regions show evidence of different mixtures of compressive and solenoidal forcing, with more compressive forcing occurring primarily in swept-up shells. Finally, we emphasise the role of the sonic scale for protostellar core formation, because core formation close to the sonic scale would naturally explain the observed subsonic velocity dispersions of protostellar cores. A movie is only available in electronic form at http://www.aanda.org
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it in practice to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
Griessenauer, Christoph J; Medin, Caroline; Maingard, Julian; Chandra, Ronil V; Ng, Wyatt; Brooks, Duncan Mark; Asadi, Hamed; Killer-Oberpfalzer, Monika; Schirmer, Clemens M; Moore, Justin M; Ogilvy, Christopher S; Thomas, Ajith J; Phan, Kevin
2018-02-01
Mechanical thrombectomy has become the standard of care for management of most large vessel occlusion (LVO) strokes. When patients with LVO present with minor stroke symptomatology, no consensus on the role of mechanical thrombectomy exists. A systematic review and meta-analysis were performed to identify studies that focused on mechanical thrombectomy, either as a standalone treatment or with intravenous tissue plasminogen activator (IV tPA), in patients with mild strokes with LVO, defined as a baseline National Institutes of Health Stroke Scale score ≤5 at presentation. Data on methodology, quality criteria, and outcome measures were extracted, and outcomes were compared using odds ratio as a summary statistic. Five studies met the selection criteria and were included. When compared with medical therapy without IV tPA, mechanical thrombectomy and medical therapy with IV tPA were associated with improved 90-day modified Rankin Scale (mRS) score. Among medical patients who were not eligible for IV tPA, those who underwent mechanical thrombectomy were more likely to experience good 90-day mRS than those who were not. There was no significant difference in functional outcome between mechanical thrombectomy and medical therapy with IV tPA, and no treatment subgroup was associated with intracranial hemorrhage or death. In patients with mild strokes due to LVO, mechanical thrombectomy and medical therapy with IV tPA led to better 90-day functional outcome. Mechanical thrombectomy plays an important role in the management of these patients, particularly in those not eligible for IV tPA. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance
2013-01-01
RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it in practice to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets.
Torsional Oscillations in a Global Solar Dynamo
NASA Astrophysics Data System (ADS)
Beaudoin, P.; Charbonneau, P.; Racine, E.; Smolarkiewicz, P. K.
2013-02-01
We characterize and analyze rotational torsional oscillations developing in a large-eddy magnetohydrodynamical simulation of solar convection (Ghizaru, Charbonneau, and Smolarkiewicz, Astrophys. J. Lett. 715, L133, 2010; Racine et al., Astrophys. J. 735, 46, 2011) producing an axisymmetric, large-scale, magnetic field undergoing periodic polarity reversals. Motivated by the many solar-like features exhibited by these oscillations, we carry out an analysis of the large-scale zonal dynamics. We demonstrate that simulated torsional oscillations are not driven primarily by the periodically varying large-scale magnetic torque, as one might have expected, but rather via the magnetic modulation of angular-momentum transport by the large-scale meridional flow. This result is confirmed by a straightforward energy analysis. We also detect a fairly sharp transition in rotational dynamics taking place as one moves from the base of the convecting layers to the base of the thin tachocline-like shear layer formed in the stably stratified fluid layers immediately below. We conclude by discussing the implications of our analyses with regard to the mechanism of amplitude saturation in the global dynamo operating in the simulation, and speculate on the possible precursor value of torsional oscillations for the forecast of solar-cycle characteristics.
Sharma, Hitt J; Patil, Vishwanath D; Lalwani, Sanjay K; Manglani, Mamta V; Ravichandran, Latha; Kapre, Subhash V; Jadhav, Suresh S; Parekh, Sameer S; Ashtagi, Girija; Malshe, Nandini; Palkar, Sonali; Wade, Minal; Arunprasath, T K; Kumar, Dinesh; Shewale, Sunil D
2012-01-11
Hib vaccine can be easily incorporated into the EPI vaccination schedule, as the immunization schedule for Hib is similar to that of DTP vaccine. To meet the global demand for Hib vaccine, SIIL scaled up the Hib conjugate manufacturing process. This study was conducted in Indian infants to assess and compare the immunogenicity and safety of the SIIL DTwP-HB+Hib (Pentavac(®)) vaccine manufactured at large scale with the 'same vaccine' manufactured at a smaller scale. 720 infants aged 6-8 weeks were randomized (2:1 ratio) to receive 0.5 ml of Pentavac(®) vaccine from two different lots, one produced by the scaled-up process and the other by the small-scale process. Serum samples obtained before and at one month after the 3rd dose of vaccine from both groups were tested for IgG antibody response by ELISA and compared to assess non-inferiority. Neither immunological interference nor increased reactogenicity was observed in either of the vaccine groups. All infants developed protective antibody titres against diphtheria, tetanus and Hib disease. For the hepatitis B antigen, one child from each group remained sero-negative. The response to pertussis was 88% in the large-scale group vis-à-vis 87% in the small-scale group. Non-inferiority was concluded for all five components of the vaccine. No serious adverse event was reported in the study. The scaled-up vaccine achieved a response comparable to the small-scale vaccine in terms of safety and immunogenicity and can therefore be easily incorporated into the routine childhood vaccination programme. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sendzimir, Jan; Dubel, Anna; Linnerooth-Bayer, Joanne; Damurski, Jakub; Schroeter, Dagmar
2014-05-01
Historically, large reservoirs have been the dominant strategy to counter flood and drought risk in Europe. However, a number of smaller-scale approaches have emerged as alternative strategies. To compare the cost effectiveness of reservoirs and these alternatives, we calculated the investment and maintenance costs, in euros per m³ of water stored or of annual runoff reduced, for five different strategies: large reservoirs (1.68 euros), large on-farm ponds (5.88 euros), small on-farm ponds (558.00 euros), shelterbelts (6.86 euros), and switching to conservation tillage (-9.20 euros). The most cost-effective measure for reducing runoff is switching to conservation tillage practices, because this switch reduces machinery and labor costs in addition to reducing water runoff. Although shelterbelts that reduce annual runoff cannot be directly compared to ponds and reservoirs that store water, our estimates show that they likely compare favorably as a natural water retention measure, especially when taking account of their co-benefits in terms of erosion control, biodiversity and pollination. Another useful result is our demonstration of the economies of scale among reservoirs and ponds for storing water. Small ponds are two orders of magnitude more costly to construct and maintain as a flood and drought prevention measure than large reservoirs. Here, again, there are large co-benefits that should be factored into the cost-benefit equation, including especially the value of small ponds in promoting corridors for migration. This analysis shows the importance of carrying out more extensive cost-benefit estimates across on-farm and off-farm measures for tackling drought and flood risk in the context of a changing climate. While concrete recommendations for supporting water retention measures will depend on a more detailed investigation of their costs and benefits, this research highlights the potential of natural water retention measures as a complement to conventional investments in large reservoirs.
Large Scale Analysis of Geospatial Data with Dask and XArray
NASA Astrophysics Data System (ADS)
Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.
2017-12-01
The analysis of geospatial data with high level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are familiar to high-level-language researchers while also scaling out to much larger datasets. This broadens the access of researchers to larger datasets on high performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler, Dask.array, a multi-dimensional array that copies the popular NumPy interface, and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, and also recent benefits and challenges of distributed computing on clusters of machines.
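A minimal sketch of the XArray-on-Dask pattern the talk describes is shown below: a labeled, chunked array whose reductions are evaluated lazily. The random data and chunk sizes are placeholders for a real geospatial dataset.

    import dask.array as da
    import numpy as np
    import xarray as xr

    # A "year" of hourly global fields, chunked so the full array never has to fit in memory.
    data = da.random.random((8760, 180, 360), chunks=(24, 180, 360))
    field = xr.DataArray(
        data,
        dims=("time", "lat", "lon"),
        coords={"lat": np.linspace(-89.5, 89.5, 180),
                "lon": np.linspace(0.5, 359.5, 360)},
        name="temperature",
    )

    # Label-based, lazy operations; nothing is computed until .compute() is called.
    zonal_mean = field.mean(dim=("time", "lon"))
    print(zonal_mean.compute().values[:5])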
When micro meets macro: microbial lipid analysis and ecosystem ecology
NASA Astrophysics Data System (ADS)
Balser, T.; Gutknecht, J.
2008-12-01
There is growing interest in linking soil microbial community composition and activity with large-scale field studies of nutrient cycling or plant community response to disturbances. While analysis of microbial communities has moved rapidly in the past decade from culture-based to non-culture-based techniques, it must still be asked what has been gained from the move. How well does the necessarily micro-scale of microbial analysis allow us to address questions of interest at the macro-scale? Several challenges exist in bridging the scales, and foremost is the question of methodological feasibility. Past microbiological methodologies have not been readily adaptable to the large sample sizes necessary for ecosystem-scale research. As a result, it has been difficult to generate compatible microbial and ecosystem data sets. We describe the use of a modified lipid extraction method to generate microbial community data sets that allow us to match landscape-scale or long-term ecological studies with microbial community data. We briefly discuss the challenges and advantages associated with lipid analysis as an approach to addressing ecosystem ecological studies, and provide examples from our research in ecosystem restoration and recovery following disturbance and climate change.
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (Ap.J., 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet sun: (1) Magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules. (2) The large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation. (3) The heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
A relationship between porosity and permeability of carbonate rock reservoirs
NASA Astrophysics Data System (ADS)
Lee, J.; Park, Y.; Jo, Y.; Jeong, J.; Eom, S.
2009-12-01
Most oil reservoirs in the world occur in carbonate rocks. Thus, characterization of carbonate reservoirs, including understanding the correlation between porosity and permeability, is essential for enhancing oil recovery. Compared with other sedimentary rocks such as sandstone and shale, carbonate rocks exhibit a wide variety of vertical and horizontal heterogeneities. In general, pores of carbonate rocks can be affected by mineral dissolution, replacement by other minerals and re-crystallization, which are post-depositional processes. Permeability has been estimated over a wide range of scales by thin-section image analysis, rock core experiments, geophysical well logging data and large-scale aquifer tests. For the same porosity, the permeability may show a wide variation. In this study, a large number of porosity and permeability data pairs for carbonate rocks (reservoirs) worldwide were collected from the literature. The porosity and permeability data were grouped according to test scale, reservoir location and rock type. As is already known, the relation showed a rather scattered, non-monotonic distribution in this study as well, which indicates that higher porosity does not necessarily mean higher permeability of the rock formation. This study provides the analysis results and implications for oil production from carbonate reservoirs. This research was funded by the Energy Efficiency and Resources Program of KETEP (Korea Institute of Energy Technology Evaluation and Planning), Grant No. 2009T100200058.
Brans, Kristien I; Govaert, Lynn; Engelen, Jessie M T; Gianuca, Andros T; Souffreau, Caroline; De Meester, Luc
2017-01-19
Urbanization causes both changes in community composition and evolutionary responses, but most studies focus on these responses in isolation. We performed an integrated analysis assessing the relative contribution of intra- and interspecific trait turnover to the observed change in zooplankton community body size in 83 cladoceran communities along urbanization gradients quantified at seven spatial scales (50-3200 m radii). We also performed a quantitative genetic analysis on 12 Daphnia magna populations along the same urbanization gradient. Body size in zooplankton communities generally declined with increasing urbanization, but the opposite was observed for communities dominated by large species. The contribution of intraspecific trait variation to community body size turnover with urbanization strongly varied with the spatial scale considered, and was highest for communities dominated by large cladoceran species and at intermediate spatial scales. Genotypic size at maturity was smaller for urban than for rural D. magna populations and for animals cultured at 24°C compared with 20°C. While local genetic adaptation likely contributed to the persistence of D. magna in the urban heat islands, buffering for the phenotypic shift to larger body sizes with increasing urbanization, community body size turnover was mainly driven by non-genetic intraspecific trait change.This article is part of the themed issue 'Human influences on evolution, and the ecological and societal consequences'. © 2016 The Author(s).
Souffreau, Caroline
2017-01-01
Urbanization causes both changes in community composition and evolutionary responses, but most studies focus on these responses in isolation. We performed an integrated analysis assessing the relative contribution of intra- and interspecific trait turnover to the observed change in zooplankton community body size in 83 cladoceran communities along urbanization gradients quantified at seven spatial scales (50–3200 m radii). We also performed a quantitative genetic analysis on 12 Daphnia magna populations along the same urbanization gradient. Body size in zooplankton communities generally declined with increasing urbanization, but the opposite was observed for communities dominated by large species. The contribution of intraspecific trait variation to community body size turnover with urbanization strongly varied with the spatial scale considered, and was highest for communities dominated by large cladoceran species and at intermediate spatial scales. Genotypic size at maturity was smaller for urban than for rural D. magna populations and for animals cultured at 24°C compared with 20°C. While local genetic adaptation likely contributed to the persistence of D. magna in the urban heat islands, buffering for the phenotypic shift to larger body sizes with increasing urbanization, community body size turnover was mainly driven by non-genetic intraspecific trait change. This article is part of the themed issue ‘Human influences on evolution, and the ecological and societal consequences’. PMID:27920375
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.
2017-01-01
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351
Collective phase response curves for heterogeneous coupled oscillators
NASA Astrophysics Data System (ADS)
Hannay, Kevin M.; Booth, Victoria; Forger, Daniel B.
2015-08-01
Phase response curves (PRCs) have become an indispensable tool in understanding the entrainment and synchronization of biological oscillators. However, biological oscillators are often found in large coupled heterogeneous systems and the variable of physiological importance is the collective rhythm resulting from an aggregation of the individual oscillations. To study this phenomenon we consider phase resetting of the collective rhythm for large ensembles of globally coupled Sakaguchi-Kuramoto oscillators. Making use of Ott-Antonsen theory we derive an asymptotically valid analytic formula for the collective PRC. A result of this analysis is a characteristic scaling for the change in the amplitude and entrainment points for the collective PRC compared to the individual oscillator PRC. We support the analytical findings with numerical evidence and demonstrate the applicability of the theory to large ensembles of coupled neuronal oscillators.
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
NASA Astrophysics Data System (ADS)
Fischer, P. D.; Brown, M. E.; Trumbo, S. K.; Hand, K. P.
2017-01-01
We present spatially resolved spectroscopic observations of Europa’s surface at 3-4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest quality spatially resolved reflectance spectra of Europa’s surface at 3-4 μm. The observations spatially resolve Europa’s large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified into three types: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi
2017-04-01
The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of the new French research infrastructure for observation of the critical zone, OZCAR. SNO KARST is composed of several karst sites distributed over conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountain, western and northwestern shore near the Atlantic or the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving understanding of the hydrological functioning of karst catchments. Here we used several sites of SNO KARST to assess the hydrological response of karst catchments to long-term variation of large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and karst water resources variability. As karst hydrosystems are highly heterogeneous media, they behave differently across different time-scales: we explore the large-scale/local-scale relationships according to time-scale using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea level pressure (SLP). The different wavelet components of karst discharge in response to the corresponding wavelet components of climate fields are either 1) compared to physico-chemical/geochemical responses at karst springs, or 2) interpreted in terms of hydrological functioning by comparing discharge wavelet components to internal components obtained from precipitation/discharge models using the KARSTMOD conceptual modeling platform of SNO KARST.
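To illustrate the wavelet multiresolution comparison described above, the sketch below decomposes a synthetic discharge series and a synthetic climate index into scale components and correlates them scale by scale; the series, wavelet and decomposition level are illustrative assumptions, not SNO KARST data.

    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    n = 4096                                     # ~11 years of daily values
    t = np.arange(n)
    slp_index = np.sin(2 * np.pi * t / 365.25) + 0.5 * rng.normal(size=n)
    discharge = np.sin(2 * np.pi * (t - 30) / 365.25) + 0.8 * rng.normal(size=n)

    def components(x, wavelet="db4", level=6):
        """Per-level detail reconstructions (a simple multiresolution analysis)."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        out = []
        for i in range(1, len(coeffs)):
            keep = [np.zeros_like(c) for c in coeffs]
            keep[i] = coeffs[i]
            out.append(pywt.waverec(keep, wavelet)[:len(x)])
        return out

    pairs = zip(components(discharge), components(slp_index))
    for k, (d_c, s_c) in enumerate(pairs, 1):
        r = np.corrcoef(d_c, s_c)[0, 1]
        print(f"scale component {k} (coarse to fine): correlation = {r:+.2f}")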
Successful scaling-up of self-sustained pyrolysis of oil palm biomass under pool-type reactor.
Idris, Juferi; Shirai, Yoshihito; Andou, Yoshito; Mohd Ali, Ahmad Amiruddin; Othman, Mohd Ridzuan; Ibrahim, Izzudin; Yamamoto, Akio; Yasuda, Nobuhiko; Hassan, Mohd Ali
2016-02-01
An appropriate technology for waste utilisation, especially for the large amount of abundant pressed-shredded oil palm empty fruit bunch (OPEFB), is important for the oil palm industry. Self-sustained pyrolysis, whereby the oil palm biomass is combusted by itself to provide the heat for pyrolysis without an electrical heater, is preferable owing to its simplicity, ease of operation and low energy requirement. In this study, biochar production by self-sustained pyrolysis of oil palm biomass in the form of oil palm empty fruit bunch was tested in a 3-t large-scale pool-type reactor. During the pyrolysis process, the biomass was loaded layer by layer when smoke appeared at the top, to minimise the entrance of oxygen. This method significantly increased the yield of biochar. In our previous report, we tested a 30-kg pilot-scale capacity under self-sustained pyrolysis and found that the higher heating value (HHV) obtained was 22.6-24.7 MJ kg(-1) with a 23.5%-25.0% yield. In this scaled-up study, the 3-t large-scale procedure produced an HHV of 22.0-24.3 MJ kg(-1) with a 30%-34% yield on a wet-weight basis. The maximum self-sustained pyrolysis temperature for the large-scale procedure can reach between 600 °C and 700 °C. We conclude that large-scale biochar production by self-sustained pyrolysis was successful, as the biochar produced was comparable to that from the medium-scale procedure and from other studies using an electrical heating element, making it an appropriate technology for waste utilisation, particularly for the oil palm industry. © The Author(s) 2015.
Initial sequencing and comparative analysis of the mouse genome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waterston, Robert H.; Lindblad-Toh, Kerstin; Birney, Ewan
2002-12-15
The sequence of the mouse genome is a key informational tool for understanding the contents of the human genome and a key experimental tool for biomedical research. Here, we report the results of an international collaboration to produce a high-quality draft sequence of the mouse genome. We also present an initial comparative analysis of the mouse and human genomes, describing some of the insights that can be gleaned from the two sequences. We discuss topics including the analysis of the evolutionary forces shaping the size, structure and sequence of the genomes; the conservation of large-scale synteny across most of the genomes; the much lower extent of sequence orthology covering less than half of the genomes; the proportions of the genomes under selection; the number of protein-coding genes; the expansion of gene families related to reproduction and immunity; the evolution of proteins; and the identification of intraspecies polymorphism.
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
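As a concrete (and heavily simplified) illustration of what a homogenized model delivers, the sketch below uses the standard leading-order result for a one-dimensional Fisher-type equation with rapidly varying coefficients: the effective diffusivity is the harmonic mean of D(x), the effective growth rate is the arithmetic mean of r(x), and the invasion speed is approximately 2*sqrt(r_bar*D_hom). This is not the paper's reaction-diffusion-advection derivation, and the patchy landscape below is an assumed example.

    import numpy as np

    # A fragmented landscape: alternating patches of good and poor habitat.
    D = np.array([1.0, 0.2, 1.0, 0.2, 1.0, 0.2])     # diffusivity per patch
    r = np.array([0.8, -0.1, 0.8, -0.1, 0.8, -0.1])  # growth rate per patch
    frac = np.full(6, 1.0 / 6.0)                      # equal patch proportions

    D_hom = 1.0 / np.sum(frac / D)                    # harmonic mean of diffusivity
    r_bar = np.sum(frac * r)                          # arithmetic mean of growth

    if r_bar > 0:
        speed = 2.0 * np.sqrt(r_bar * D_hom)
        print(f"D_hom = {D_hom:.3f}, r_bar = {r_bar:.3f}, invasion speed ~ {speed:.3f}")
    else:
        print("Mean growth rate non-positive: persistence is not guaranteed.")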
Mohamed Yusoff, Aini; Tan, Tze King; Hari, Ranjeev; Koepfli, Klaus-Peter; Wee, Wei Yee; Antunes, Agostinho; Sitam, Frankie Thomas; Rovie-Ryan, Jeffrine Japning; Karuppannan, Kayal Vizi; Wong, Guat Jah; Lipovich, Leonard; Warren, Wesley C.; O’Brien, Stephen J.; Choo, Siew Woh
2016-01-01
Pangolins are scale-covered mammals comprising eight endangered species. Maintaining pangolins in captivity is a significant challenge, in part because little is known about their genetics. Here we provide the first large-scale sequencing of the critically endangered Manis javanica transcriptomes from eight different organs using Illumina HiSeq technology, yielding ~75 gigabases and 89,754 unigenes. We found some unigenes involved in the insect hormone biosynthesis pathway and also 747 lipid metabolism-related unigenes that may provide insight into the lipid metabolism system in pangolins. Comparative analysis between M. javanica and other mammals revealed many pangolin-specific genes significantly over-represented in stress-related processes, cell proliferation and response to external stimulus, probably reflecting the traits and adaptations of the analyzed pregnant female M. javanica. Our study provides an invaluable resource for future functional works that may be highly relevant for the conservation of pangolins. PMID:27618997
Rogers, R; Sewell, K W; Morey, L C; Ustad, K L
1996-12-01
Psychological assessment with multiscale inventories is largely dependent on the honesty and forthrightness of those persons evaluated. We investigated the effectiveness of the Personality Assessment Inventory (PAI) in detecting participants feigning three specific disorders: schizophrenia, major depression, and generalized anxiety disorder. With a simulation design, we tested the PAI validity scales on 166 naive (undergraduates with minimal preparation) and 80 sophisticated (doctoral psychology students with 1 week preparation) participants. We compared their results to persons with the designated disorders: schizophrenia (n = 45), major depression (n = 136), and generalized anxiety disorder (n = 40). Although moderately effective with naive simulators, the validity scales evidenced only modest positive predictive power with their sophisticated counterparts. Therefore, we performed a two-stage discriminant analysis that yielded a moderately high hit rate (> 80%) that was maintained in the cross-validation sample, irrespective of the feigned disorder or the sophistication of the simulators.
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
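The 'single additional simulation' property can be demonstrated on a toy discrete problem: for A(p) u = b with objective J = g^T u, one adjoint solve gives the gradient with respect to every parameter at once. The sketch below is such a toy, not the NASA CFD implementation; the matrices and parameter dependence are randomly generated assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_params = 50, 10
    A0 = np.eye(n) * 4.0 + rng.normal(0, 0.1, (n, n))
    dA = [rng.normal(0, 0.05, (n, n)) for _ in range(n_params)]   # dA/dp_i
    b = rng.normal(size=n)
    g = rng.normal(size=n)
    p = rng.normal(size=n_params)

    A = A0 + sum(pi * dAi for pi, dAi in zip(p, dA))
    u = np.linalg.solve(A, b)                  # primal solve
    lam = np.linalg.solve(A.T, g)              # single adjoint solve
    grad_adjoint = np.array([-lam @ (dAi @ u) for dAi in dA])

    # Check one component against a finite difference (which needs an extra primal solve).
    h, i = 1e-6, 0
    u_h = np.linalg.solve(A + h * dA[i], b)
    grad_fd = (g @ u_h - g @ u) / h
    print("adjoint:", round(grad_adjoint[i], 6), "finite difference:", round(grad_fd, 6))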
Reconsidering Replication: New Perspectives on Large-Scale School Improvement
ERIC Educational Resources Information Center
Peurach, Donald J.; Glazer, Joshua L.
2012-01-01
The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…
USDA-ARS?s Scientific Manuscript database
Vegetative cover can be quantified quickly and consistently and often at lower cost with image analysis of color digital images than with visual assessments. Image-based mapping of vegetative cover for large-scale research and management decisions can now be considered with the accuracy of these met...
Large scale fire whirls: Can their formation be predicted?
J. Forthofer; Bret Butler
2010-01-01
Large scale fire whirls have not traditionally been recognized as a frequent phenomenon on wildland fires. However, there are anecdotal data suggesting that they can and do occur with some regularity. This paper presents a brief summary of this information and an analysis of the causal factors leading to their formation.
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
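The resolution-tuning idea above can be illustrated with the density construction alone: a sum of Gaussian kernels over the point set whose width acts as the resolution knob (coarse widths keep only large-scale structure). The point set and widths below are synthetic assumptions, and the subsequent persistent-homology filtration step is omitted.

    import numpy as np

    def rigidity_density(points, grid, eta):
        """Sum of Gaussian kernels of width eta, evaluated at the grid positions."""
        d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / eta**2).sum(axis=1)

    rng = np.random.default_rng(4)
    # Two clusters of "atoms" in 2-D standing in for a large molecular data set.
    points = np.vstack([rng.normal(0.0, 0.3, (200, 2)), rng.normal(3.0, 0.3, (200, 2))])
    xs = np.linspace(-1.5, 4.5, 60)
    grid = np.array([(x, y) for x in xs for y in xs])

    for eta in (0.2, 1.0, 3.0):                       # fine to coarse resolution
        dens = rigidity_density(points, grid, eta)
        print(f"eta={eta}: density range {dens.min():.2f} .. {dens.max():.2f}")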
Simulation of fatigue crack growth under large scale yielding conditions
NASA Astrophysics Data System (ADS)
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued for as many cycles as needed to reach a stabilized value of ΔCTOD. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
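A minimal sketch of this class of model, with the ΔCTOD estimate replaced by a simple small-scale-yielding relation rather than the paper's cyclic J-interpolation formula, is shown below; all material and loading values are assumptions for illustration.

    import math

    E = 200e3        # Young's modulus [MPa]
    sigma_y = 600.0  # cyclic yield stress [MPa]
    d_sigma = 200.0  # applied stress range [MPa]
    beta = 0.3       # assumed proportionality between dCTOD and da/dN
    a = 50e-6        # initial crack length [m] (a 50 um microcrack)
    a_final = 1e-3   # stop once the crack reaches 1 mm

    cycles = 0
    while a < a_final and cycles < 5_000_000:
        dK = d_sigma * math.sqrt(math.pi * a)    # stress-intensity range [MPa*sqrt(m)]
        dCTOD = dK**2 / (2.0 * sigma_y * E)      # cyclic CTOD estimate [m]
        a += beta * dCTOD                        # da/dN = beta * dCTOD
        cycles += 1

    print(f"{cycles} cycles to grow from 50 um to {a * 1e3:.2f} mm")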
Comparative studies of groundwater vulnerability assessment
NASA Astrophysics Data System (ADS)
Maria, Rizka
2018-02-01
Pollution of groundwater is a primary concern because aquifers are susceptible to contamination from land use and other anthropogenic impacts. Groundwater vulnerability can be intrinsic or specific: intrinsic vulnerability depends on the geological and hydrogeological characteristics of the aquifer, whereas specific vulnerability also accounts for particular contaminants. Vulnerability assessment is an essential step in evaluating groundwater contamination, and it provides a visual analysis that helps planners and decision makers achieve the sustainable management of water resources. Comparative studies apply different methodologies to the same area to provide a baseline evaluation of groundwater vulnerability. Comparing the methods reveals several advantages and disadvantages. SI can be overlaid on DRASTIC and Pesticide DRASTIC to extract differences in sensitivity. DRASTIC identifies low susceptibility and underestimates the pollution risk, whereas Pesticide DRASTIC and SI represent the risk better and are recommended for future use. The SINTACS method generates very high vulnerability zones where surface waters interact with the aquifer. The GOD method can be adequate for vulnerability mapping in karstified carbonate aquifers at small to moderate scales, while the EPIK method can be used at large scales. The GOD method is suitable for large-area applications such as land management, whereas DRASTIC offers good accuracy and is more useful in detailed geoenvironmental studies.
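For orientation, a minimal sketch of how an index of the DRASTIC family is assembled: each of the seven hydrogeological parameters receives a rating (1-10) that is multiplied by a fixed weight and summed. The weights below are the commonly quoted standard DRASTIC weights and the cell ratings are invented for illustration; they are not values from the studies compared above.

# Standard DRASTIC weights (commonly quoted values); Pesticide DRASTIC reweights several parameters.
DRASTIC_WEIGHTS = {
    "Depth_to_water": 5,
    "net_Recharge": 4,
    "Aquifer_media": 3,
    "Soil_media": 2,
    "Topography": 1,
    "Impact_of_vadose_zone": 5,
    "hydraulic_Conductivity": 3,
}

def drastic_index(ratings, weights=DRASTIC_WEIGHTS):
    """Vulnerability index for one grid cell given parameter ratings on a 1-10 scale."""
    return sum(weights[name] * rating for name, rating in ratings.items())

cell = {  # illustrative ratings for a single cell
    "Depth_to_water": 7, "net_Recharge": 6, "Aquifer_media": 8, "Soil_media": 5,
    "Topography": 9, "Impact_of_vadose_zone": 6, "hydraulic_Conductivity": 4,
}
print("DRASTIC index:", drastic_index(cell))  # higher index = higher intrinsic vulnerability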
Comparing Shock geometry from MHD simulation to that from the Q/A-scaling analysis
NASA Astrophysics Data System (ADS)
Li, G.; Zhao, L.; Jin, M.
2017-12-01
In large SEP events, ions can be accelerated at CME-driven shocks to very high energies. Spectra of heavy ions in many large SEP events show features such as roll-overs or spectral breaks. In some events, when the spectra are plotted in energy/nucleon, they can be shifted relative to each other so that the spectra align. The amount of shift is charge-to-mass ratio (Q/A) dependent and varies from event to event. In the work of Li et al. (2009), the Q/A dependence of the scaling is related to the shock geometry when the CME-driven shock is close to the Sun. For events where multiple in-situ spacecraft observations exist, one may expect that different spacecraft are connected to different portions of the CME-driven shock that have different shock geometries, therefore yielding different Q/A dependences. At the same time, shock geometry can also be obtained from MHD simulations. This means we can compare shock geometry from two completely different approaches: one from MHD simulation and the other from in-situ spectral fitting. In this work, we examine this comparison for selected events.
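A minimal sketch of the spectral-alignment step implied above: rescale each species' energy/nucleon axis by (Q/A)**alpha and search for the exponent alpha that best collapses the spectra onto a reference species; alpha is the quantity that carries the shock-geometry information. The spectra, the Q/A values, and the least-squares alignment below are synthetic illustrations, not event data or the fitting procedure of Li et al. (2009).

import numpy as np

def misalignment(alpha, species, E_ref, J_ref):
    """Sum of squared log-differences between each (Q/A)-shifted spectrum and the reference,
    evaluated on the overlapping part of the reference energy grid."""
    total = 0.0
    for E, J, qa in species:
        E_s = E / qa**alpha                      # shift the energy/nucleon axis by (Q/A)**alpha
        mask = (E_ref >= E_s[0]) & (E_ref <= E_s[-1])
        total += np.sum((np.log(np.interp(E_ref[mask], E_s, J)) - np.log(J_ref[mask])) ** 2)
    return total

# synthetic spectra whose break energy scales as (Q/A)**1.5
E = np.logspace(-1, 2, 80)                                   # MeV/nucleon
spectrum = lambda E, E_b: (E / E_b) ** -1.5 * np.exp(-E / E_b)
true_alpha, E0 = 1.5, 10.0
E_ref, J_ref = E, spectrum(E, E0)                            # reference species with Q/A = 1
species = [(E, spectrum(E, E0 * qa**true_alpha), qa) for qa in (0.27, 0.40, 0.50)]

alphas = np.linspace(0.0, 3.0, 61)
best = min(alphas, key=lambda a: misalignment(a, species, E_ref, J_ref))
print(f"recovered Q/A exponent: {best:.2f} (input {true_alpha})")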
NASA Astrophysics Data System (ADS)
Sogaro, Francesca; Poole, Robert; Dennis, David
2014-11-01
High-speed stereoscopic particle image velocimetry has been performed in fully developed turbulent pipe flow at moderate Reynolds numbers with and without a drag-reducing additive (an aqueous solution of high molecular weight polyacrylamide). Three-dimensional large and very large-scale motions (LSM and VLSM) are extracted from the flow fields by a detection algorithm and the characteristics for each case are statistically compared. The results show that the three-dimensional extent of VLSMs in drag reduced (DR) flow appears to increase significantly compared to their Newtonian counterparts. A statistical increase in azimuthal extent of DR VLSM is observed by means of two-point spatial autocorrelation of the streamwise velocity fluctuation in the radial-azimuthal plane. Furthermore, a remarkable increase in length of these structures is observed by three-dimensional two-point spatial autocorrelation. These results are accompanied by an analysis of the swirling strength in the flow field that shows a significant reduction in strength and number of the vortices for the DR flow. The findings suggest that the damping of the small scales due to polymer addition results in the undisturbed development of longer flow structures.
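A minimal sketch of the two-point spatial autocorrelation used above to quantify the streamwise extent of the structures, R_uu(Δ) = <u'(x) u'(x+Δ)> / <u'²>, applied to a synthetic streamwise-azimuthal fluctuation field; the imposed long wave and noise level are illustrative assumptions, not PIV data.

import numpy as np

def two_point_autocorr(u, axis=0):
    """Normalized two-point autocorrelation of a 2-D fluctuation field along one axis,
    computed with the FFT (the field is treated as periodic)."""
    u = u - u.mean()
    n = u.shape[axis]
    U = np.fft.rfft(u, axis=axis)
    R = np.fft.irfft(U * np.conj(U), n=n, axis=axis).mean(axis=1 - axis)
    return R / R[0]

# synthetic streamwise velocity fluctuations with an imposed long-wavelength mode (a crude VLSM)
nx, nth = 512, 128
x = np.linspace(0.0, 30.0, nx, endpoint=False)          # streamwise distance in pipe radii
theta = np.linspace(0.0, 2.0 * np.pi, nth, endpoint=False)
rng = np.random.default_rng(1)
u = (np.sin(2.0 * np.pi * x[:, None] / 10.0)            # structure roughly 10 radii long
     * np.cos(3.0 * theta[None, :])                     # three structures around the azimuth
     + 0.5 * rng.standard_normal((nx, nth)))

R_x = two_point_autocorr(u, axis=0)
dx = x[1] - x[0]
corr_length = dx * np.argmax(R_x < 0.05)                # first separation below a 0.05 threshold
print(f"streamwise correlation length ~ {corr_length:.1f} pipe radii")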
Möllmann, Christian; Conversi, Alessandra; Edwards, Martin
2011-08-23
Abrupt and rapid ecosystem shifts (where major reorganizations of food-web and community structures occur), commonly termed regime shifts, are changes between contrasting and persisting states of ecosystem structure and function. These shifts have been increasingly reported for exploited marine ecosystems around the world from the North Pacific to the North Atlantic. Understanding the drivers and mechanisms leading to marine ecosystem shifts is crucial in developing adaptive management strategies to achieve sustainable exploitation of marine ecosystems. An international workshop on a comparative approach to analysing these marine ecosystem shifts was held at Hamburg University, Institute for Hydrobiology and Fisheries Science, Germany on 1-3 November 2010. Twenty-seven scientists from 14 countries attended the meeting, representing specialists from seven marine regions, including the Baltic Sea, the North Sea, the Barents Sea, the Black Sea, the Mediterranean Sea, the Bay of Biscay and the Scotian Shelf off the Canadian East coast. The goal of the workshop was to conduct the first large-scale comparison of marine ecosystem regime shifts across multiple regional areas, in order to support the development of ecosystem-based management strategies.
How much a galaxy knows about its large-scale environment?: An information theoretic perspective
NASA Astrophysics Data System (ADS)
Pandey, Biswajit; Sarkar, Suman
2017-05-01
The small-scale environment characterized by the local density is known to play a crucial role in deciding galaxy properties, but the role of the large-scale environment in galaxy formation and evolution remains less clear. We propose an information theoretic framework to investigate the influence of the large-scale environment on galaxy properties and apply it to data from the Galaxy Zoo project, which provides visual morphological classifications of ~1 million galaxies from the Sloan Digital Sky Survey. We find a non-zero mutual information between morphology and environment that decreases with increasing length-scale but persists throughout the entire range of length-scales probed. We estimate the conditional mutual information and the interaction information between morphology and environment by conditioning the environment on different length-scales and find a synergic interaction between them that operates up to length-scales of at least ~30 h⁻¹ Mpc. Our analysis indicates that these interactions largely arise due to the mutual information shared between the environments on different length-scales.
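A minimal sketch of the estimator implied above: bin galaxies by morphology class and by environment density at a given smoothing scale, build the joint histogram, and compute I(M;E) = Σ p(m,e) log[p(m,e)/(p(m)p(e))]; repeating for environments defined on increasing scales traces how the information decays. The catalogue below is synthetic, and the way the "scale" enters (extra scatter) is purely illustrative.

import numpy as np

def mutual_information(x, y, bins=10):
    """I(X;Y) in bits, estimated from a joint histogram of two binned variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

# synthetic catalogue: morphology (0 = spiral, 1 = elliptical) weakly coupled to local density
rng = np.random.default_rng(0)
n = 50_000
density = rng.lognormal(mean=0.0, sigma=0.8, size=n)
p_elliptical = density / (1.0 + density)                    # denser environments host more ellipticals
morphology = (rng.random(n) < p_elliptical).astype(int)

# crude stand-in for smoothing the environment on larger scales: add scale-dependent scatter
for scale, extra_scatter in [(2, 0.0), (10, 0.5), (30, 1.0)]:
    env = np.log(density) + rng.normal(0.0, extra_scatter, n)
    mi = mutual_information(morphology, env, bins=(2, 20))
    print(f"~{scale} Mpc/h environment: I(morphology; environment) = {mi:.4f} bits")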
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems requiring large numbers of grid points can be resolved with a gradient-based approach.
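To illustrate the memory argument behind the ADI choice: the alternating-direction-implicit idea replaces one large implicit solve by a sequence of one-dimensional tridiagonal (banded) sweeps, so only banded systems are ever factored. The sketch below applies Peaceman-Rachford ADI to a generic 2-D Poisson model problem, not to the Euler sensitivity equations of the paper; grid size, pseudo-time step and boundary conditions are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_banded

n = 64
h = 1.0 / (n + 1)
tau = 5e-3                                   # ADI pseudo-time step (illustrative)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # so that u = sin(pi x) sin(pi y)

def lap1d(u, axis):
    """One-dimensional Laplacian with homogeneous Dirichlet boundaries along the given axis."""
    p = np.pad(u, 1)
    if axis == 0:
        return (p[2:, 1:-1] - 2.0 * p[1:-1, 1:-1] + p[:-2, 1:-1]) / h**2
    return (p[1:-1, 2:] - 2.0 * p[1:-1, 1:-1] + p[1:-1, :-2]) / h**2

def implicit_sweep(rhs, c):
    """Solve (I - c * D2) v = rhs along axis 0 for every column; only a tridiagonal band is stored."""
    ab = np.zeros((3, n))
    ab[0, 1:] = -c / h**2            # superdiagonal
    ab[1, :] = 1.0 + 2.0 * c / h**2  # main diagonal
    ab[2, :-1] = -c / h**2           # subdiagonal
    return solve_banded((1, 1), ab, rhs)

u = np.zeros((n, n))
for _ in range(200):
    # x-implicit / y-explicit half step, then the roles are swapped (Peaceman-Rachford)
    rhs = u + 0.5 * tau * (lap1d(u, axis=1) - f)
    u_half = implicit_sweep(rhs, 0.5 * tau)
    rhs = u_half + 0.5 * tau * (lap1d(u_half, axis=0) - f)
    u = implicit_sweep(rhs.T, 0.5 * tau).T

err = np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
print(f"max deviation from sin(pi x) sin(pi y): {err:.1e}")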
Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard
2013-09-06
Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple non-identical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, the standard t test, the moderated t test (also known as limma), and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
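A minimal sketch of a rank-product statistic that tolerates missing values, roughly in the spirit described above: per-replicate ranks are computed over the observed values only, the geometric mean is taken over however many replicates a feature was seen in, and significance comes from permutations. This is an illustrative reimplementation on simulated data, not the authors' released R script.

import numpy as np

def rank_product(data, n_perm=200, seed=0):
    """Rank-product scores and permutation p-values for a features x replicates matrix of
    log-ratios containing NaNs (small score = consistently down-regulated)."""
    rng = np.random.default_rng(seed)

    def scores(x):
        ranks = np.full_like(x, np.nan)
        for j in range(x.shape[1]):
            obs = ~np.isnan(x[:, j])
            # rank 1 = most down-regulated; divide by the number observed so replicates are comparable
            ranks[obs, j] = (np.argsort(np.argsort(x[obs, j])) + 1) / obs.sum()
        with np.errstate(invalid="ignore"):
            return np.exp(np.nanmean(np.log(ranks), axis=1))   # geometric mean over observed replicates

    rp = scores(data)
    null = np.empty((n_perm, data.shape[0]))
    for p in range(n_perm):
        perm = data.copy()
        for j in range(data.shape[1]):
            obs = ~np.isnan(perm[:, j])
            perm[obs, j] = rng.permutation(perm[obs, j])        # shuffle within each replicate,
        null[p] = scores(perm)                                  # keeping the missingness pattern
    return rp, (null <= rp).mean(axis=0)                        # one-sided permutation p-values

# simulated data: 1000 features, 3 replicates, ~50% missing, first 50 features truly down-regulated
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, (1000, 3))
data[:50] -= 2.0
missing = rng.random(data.shape) < 0.5
missing[np.arange(len(data)), rng.integers(0, 3, len(data))] = False   # keep >= 1 value per feature
data[missing] = np.nan

rp, pvals = rank_product(data)
print("median p-value, changed vs unchanged features:", np.median(pvals[:50]), np.median(pvals[50:]))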
Rosenberg, D; Marino, R; Herbert, C; Pouquet, A
2016-01-01
We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024³ points, with no modeling of the small scales, no forcing and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number, defined as R_B = Re Fr², varies from negligible values to ≈ 10⁵, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau, on which the time scales become comparable (γ ≈ 0.6), when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards isotropization of the flow.
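For reference, the diagnostic quantities the comparison above is built on, with invented numbers standing in for actual run output: the buoyancy Reynolds number R_B = Re Fr² and the time-scale ratio γ = T_V/T_P, where each time scale is an energy divided by its dissipation rate.

def characteristic_times(E_v, eps_v, E_p, eps_p):
    """Velocity and potential (temperature) time scales and their ratio gamma = T_V / T_P."""
    T_v, T_p = E_v / eps_v, E_p / eps_p
    return T_v, T_p, T_v / T_p

Re, Fr = 4000.0, 0.05                       # illustrative run parameters, not values from the 65 runs
R_B = Re * Fr**2                            # buoyancy Reynolds number
T_v, T_p, gamma = characteristic_times(E_v=0.8, eps_v=0.02, E_p=0.3, eps_p=0.012)
print(f"R_B = {R_B:.1f}, T_V = {T_v:.1f}, T_P = {T_p:.1f}, gamma = {gamma:.2f}")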
Coronal hole evolution by sudden large scale changes
NASA Technical Reports Server (NTRS)
Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.
1978-01-01
Sudden shifts in coronal-hole boundaries observed by the S-054 X-ray telescope on Skylab between May and November, 1973, within 1 day of CMP of the holes, at latitudes not exceeding 40 deg, are compared with the long-term evolution of coronal-hole area. It is found that large-scale shifts in boundary locations can account for most if not all of the evolution of coronal holes. The temporal and spatial scales of these large-scale changes imply that they are the results of a physical process occurring in the corona. It is concluded that coronal holes evolve by magnetic-field lines' opening when the holes are growing, and by fields' closing as the holes shrink.
Analysis of central enterprise architecture elements in models of six eHealth projects.
Virkanen, Hannu; Mykkänen, Juha
2014-01-01
Large-scale initiatives for eHealth services have been established in many countries at the regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives, including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in the different projects.
ERIC Educational Resources Information Center
Feuer, Michael J.
2011-01-01
Few arguments about education are as effective at galvanizing public attention and motivating political action as those that compare the performance of students with their counterparts in other countries and that connect academic achievement to economic performance. Because data from international large-scale assessments (ILSA) have a powerful…
ERIC Educational Resources Information Center
Cresswell, John; Schwantner, Ursula; Waters, Charlotte
2015-01-01
This report reviews the major international and regional large-scale educational assessments, including international surveys, school-based surveys and household-based surveys. The report compares and contrasts the cognitive and contextual data collection instruments and implementation methods used by the different assessments in order to identify…
Circumnuclear Structures in Megamaser Host Galaxies
NASA Astrophysics Data System (ADS)
Pjanka, Patryk; Greene, Jenny E.; Seth, Anil C.; Braatz, James A.; Henkel, Christian; Lo, Fred K. Y.; Läsker, Ronald
2017-08-01
Using the Hubble Space Telescope, we identify circumnuclear (100-500 pc scale) structures in nine new H2O megamaser host galaxies to understand the flow of matter from kpc-scale galactic structures down to the supermassive black holes (SMBHs) at galactic centers. We double the sample analyzed in a similar way by Greene et al. and consider the properties of the combined sample of 18 sources. We find that disk-like structure is virtually ubiquitous when we can resolve <200 pc scales, in support of the notion that non-axisymmetries on these scales are a necessary condition for SMBH fueling. We perform an analysis of the orientation of our identified nuclear regions and compare it with the orientation of megamaser disks and the kpc-scale disks of the hosts. We find marginal evidence that the disk-like nuclear structures show increasing misalignment from the kpc-scale host galaxy disk as the scale of the structure decreases. In turn, we find that the orientation of both the ˜100 pc scale nuclear structures and their host galaxy large-scale disks is consistent with random with respect to the orientation of their respective megamaser disks.
Validation of a short qualitative food frequency list used in several German large scale surveys.
Winkler, G; Döring, A
1998-09-01
Our study aimed to test the validity of a short, qualitative food frequency list (FFL) used in several German large scale surveys. In the surveys of the MONICA project Augsburg, the FFL was administered to randomly selected adults. In 1984/85, a dietary survey with 7-day records (DR) was conducted within the subsample of men aged 45 to 64 (response 70%). The 899 DR were used to validate the FFL. Mean weekly food intake frequency and mean daily food intake were compared, Spearman rank order correlation coefficients were calculated, and agreement of classification into tertiles was assessed with the Kappa statistic. Spearman correlations range between 0.15 for the item "Other sweets (candies, compote)" and 0.60 for the items "Curds, yoghurt, sour milk", "Milk including butter milk" and "Mineral water"; values of the Kappa statistic vary between 0.04 ("White bread, brown bread, crispbread") and 0.41 ("Flaked oats, muesli, cornflakes" and "Milk including butter milk"). With the exception of two items, FFL data can be used for analysis at the group level. Analysis at the individual level should be done with caution. It seems as if some food groups are generally easier to ask about in an FFL than others.
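A minimal sketch of the two agreement statistics used above, Spearman rank correlation and Cohen's Kappa on tertile classification, computed on synthetic paired data that stand in for the FFL and 7-day-record values; scipy and scikit-learn are assumed to be available.

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 899
true_intake = rng.gamma(shape=2.0, scale=50.0, size=n)                         # g/day from the 7-day record
ffl_frequency = np.clip(true_intake / 40.0 + rng.normal(0.0, 1.5, n), 0, 21)   # times/week, noisy

def tertiles(values):
    """Classify values into tertiles coded 0, 1, 2."""
    return np.searchsorted(np.quantile(values, [1 / 3, 2 / 3]), values)

rho, _ = spearmanr(ffl_frequency, true_intake)
kappa = cohen_kappa_score(tertiles(ffl_frequency), tertiles(true_intake))
print(f"Spearman rho = {rho:.2f}, Cohen's kappa on tertiles = {kappa:.2f}")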
O'Dwyer, David N; Norman, Katy C; Xia, Meng; Huang, Yong; Gurczynski, Stephen J; Ashley, Shanna L; White, Eric S; Flaherty, Kevin R; Martinez, Fernando J; Murray, Susan; Noth, Imre; Arnold, Kelly B; Moore, Bethany B
2017-04-25
Idiopathic pulmonary fibrosis (IPF) is a progressive and fatal interstitial pneumonia. The disease pathophysiology is poorly understood and the etiology remains unclear. Recent advances have generated new therapies and improved knowledge of the natural history of IPF. These gains have been driven by advances in technology and improved insight into the role of various genes in mediating disease, but gene expression and protein levels do not always correlate. Thus, in this paper we apply a novel large-scale, high-throughput aptamer approach to identify more than 1100 proteins in the peripheral blood of well-characterized IPF patients and normal volunteers. We use systems biology approaches to identify a unique IPF proteome signature and give insight into the biological processes driving IPF. We found IPF plasma to be altered and enriched for proteins involved in defense response, wound healing and protein phosphorylation when compared to normal human plasma. Analysis also revealed a minimal protein signature that differentiated IPF patients from normal controls, which may allow for accurate diagnosis of IPF based on easily accessible peripheral blood. This report introduces large-scale unbiased protein discovery analysis to IPF and describes distinct biological processes that further inform disease biology.
Crustal evolution inferred from Apollo magnetic measurements
NASA Technical Reports Server (NTRS)
Dyal, P.; Daily, W. D.; Vanian, L. L.
1978-01-01
The topology of lunar remanent fields is investigated by analyzing simultaneous magnetometer and solar wind spectrometer data. The diffusion model proposed by Vanyan (1977) to describe the field-plasma interaction at the lunar surface is extended to describe the interaction with fields characterized by two scale lengths, and the extended model is compared with data from three Apollo landing sites (Apollo 12, 15 and 16) with crustal fields of differing intensity and topology. Local remanent field properties from this analysis are compared with high spatial resolution magnetic maps obtained from the electron reflection experiment. It is concluded that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized early in the evolution of the crust. Smaller scale (5 to 10 km) magnetic sources close to the surface were left by bombardment and subsequent gardening of the upper layers of these magnetized regions. The small scale sized remanent fields of about 100 gammas are measured by surface experiments, whereas the larger scale sized fields of about 0.1 gammas are measured by the orbiting subsatellite experiments.
Multipole analysis of redshift-space distortions around cosmic voids
NASA Astrophysics Data System (ADS)
Hamaus, Nico; Cousinou, Marie-Claude; Pisani, Alice; Aubert, Marie; Escoffier, Stéphanie; Weller, Jochen
2017-07-01
We perform a comprehensive redshift-space distortion analysis based on cosmic voids in the large-scale distribution of galaxies observed with the Sloan Digital Sky Survey. To this end, we measure multipoles of the void-galaxy cross-correlation function and compare them with standard model predictions in cosmology. Merely considering linear-order theory allows us to accurately describe the data on the entire available range of scales and to probe void-centric distances down to about 2 h⁻¹ Mpc. Common systematics, such as the Fingers-of-God effect, scale-dependent galaxy bias, and nonlinear clustering do not seem to play a significant role in our analysis. We constrain the growth rate of structure via the redshift-space distortion parameter β at two median redshifts, β(z̄ = 0.32) = 0.599 (+0.134, −0.124) and β(z̄ = 0.54) = 0.457 (+0.056, −0.054), with a precision that is competitive with state-of-the-art galaxy-clustering results. While the high-redshift constraint perfectly agrees with model expectations, we observe a mild 2σ deviation at z̄ = 0.32, which increases to 3σ when the data is restricted to the lowest available redshift range of 0.15
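A minimal sketch of the multipole decomposition step referred to above: project the void-galaxy cross-correlation ξ(r, μ) onto Legendre polynomials, ξ_ℓ(r) = (2ℓ+1)/2 ∫ ξ(r, μ) P_ℓ(μ) dμ. The anisotropic test function below is synthetic with a known quadrupole-to-monopole ratio; it is not SDSS data.

import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import simpson

def multipoles(xi_rmu, mu, ells=(0, 2, 4)):
    """Legendre multipoles of xi(r, mu) sampled on a (r, mu) grid with mu in [-1, 1]."""
    out = {}
    for ell in ells:
        kernel = 0.5 * (2 * ell + 1) * eval_legendre(ell, mu)
        out[ell] = simpson(xi_rmu * kernel[None, :], x=mu, axis=1)
    return out

r = np.linspace(2.0, 120.0, 60)                 # void-centric distance, h^-1 Mpc
mu = np.linspace(-1.0, 1.0, 201)
xi0 = -0.8 * np.exp(-r / 30.0)                  # monopole: voids are underdense
xi2 = 0.3 * xi0                                 # quadrupole mimicking redshift-space distortions
xi = xi0[:, None] + xi2[:, None] * eval_legendre(2, mu)[None, :]

m = multipoles(xi, mu)
i30 = np.argmin(np.abs(r - 30.0))
print("recovered quadrupole-to-monopole ratio at r = 30 Mpc/h:", round(m[2][i30] / m[0][i30], 3))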
NASA Technical Reports Server (NTRS)
Nuth, J. A.; Johnson, N. M.; Ferguson, F. T.; Hilchenbach, M.; Merouane, S.; Paquette, J. A.; Stenzel, O.; Cottin, H.; Fray, N.; Bardyn, A.;
2017-01-01
The COSIMA instrument onboard the Rosetta spacecraft collected and analyzed samples of individual particles from the coma of Comet 67P/Churyumov-Gerasimenko. Initial results of the analysis of several particles have been presented elsewhere, while analyses of the organic component of the dust particles have been shown to compare favorably to laboratory spectra of IOM extracted from the Orgueil and Murchison meteorites. Here we will compare the spectra of the same two particles to laboratory spectra of organic grain coatings produced via surface-mediated reactions of CO, H2 and N2 on amorphous iron silicate grains.
HammerCloud: A Stress Testing System for Distributed Analysis
NASA Astrophysics Data System (ADS)
van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo
2011-12-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain a predefined number of jobs running at steady state at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
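A minimal sketch of the prototype idea described above: pick a small set of prototype vectors (here k-means centers, one reasonable but assumed choice) and use them for a Nyström-style low-rank approximation of the kernel matrix, so the full n x n kernel never has to be formed. This illustrates the flavour of the approach, not the authors' full PVM algorithm.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                           # toy (mostly unlabeled) data pool
m = 50                                                    # number of prototype vectors, m << n

prototypes = KMeans(n_clusters=m, n_init=4, random_state=0).fit(X).cluster_centers_

gamma = 0.1
K_nm = rbf_kernel(X, prototypes, gamma=gamma)             # n x m block
K_mm_inv = np.linalg.pinv(rbf_kernel(prototypes, prototypes, gamma=gamma))

# Nystrom-style surrogate K ~ K_nm K_mm^{-1} K_nm^T; check it on a random 200 x 200 block
idx = rng.choice(len(X), size=200, replace=False)
K_exact = rbf_kernel(X[idx], X[idx], gamma=gamma)
K_approx = K_nm[idx] @ K_mm_inv @ K_nm[idx].T
print("relative Frobenius error of the low-rank kernel:",
      round(np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact), 3))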
Scaling of muscle architecture and fiber types in the rat hindlimb.
Eng, Carolyn M; Smallwood, Laura H; Rainiero, Maria Pia; Lahey, Michele; Ward, Samuel R; Lieber, Richard L
2008-07-01
The functional capacity of a muscle is determined by its architecture and metabolic properties. Although extensive analyses of muscle architecture and fiber type have been completed in a large number of muscles in numerous species, there have been few studies that have looked at the interrelationship of these functional parameters among muscles of a single species. Nor have the architectural properties of individual muscles been compared across species to understand scaling. This study examined muscle architecture and fiber type in the rat (Rattus norvegicus) hindlimb to examine each muscle's functional specialization. Discriminant analysis demonstrated that architectural properties are a greater predictor of muscle function (as defined by primary joint action and anti-gravity or non anti-gravity role) than fiber type. Architectural properties were not strictly aligned with fiber type, but when muscles were grouped according to anti-gravity versus non-anti-gravity function there was evidence of functional specialization. Specifically, anti-gravity muscles had a larger percentage of slow fiber type and increased muscle physiological cross-sectional area. Incongruities between a muscle's architecture and fiber type may reflect the variability of functional requirements on single muscles, especially those that cross multiple joints. Additionally, discriminant analysis and scaling of architectural variables in the hindlimb across several mammalian species was used to explore whether any functional patterns could be elucidated within single muscles or across muscle groups. Several muscles deviated from previously described muscle architecture scaling rules and there was large variability within functional groups in how muscles should be scaled with body size. This implies that functional demands placed on muscles across species should be examined on the single muscle level.
NASA Astrophysics Data System (ADS)
Boyd, O. S.; Dreger, D. S.; Gritto, R.
2015-12-01
Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitude, focal mechanism, and kinematic finite-source models with the goal of developing a rupture area scaling relationship for the Geysers and specifically for the Prati-32 EGS injection experiment. Thus far we have analyzed moment tensors of M ≥ 2 events, and are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles that of a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the micro earthquakes, and for select events, finite-source models. Stress drop inferred from corner frequencies and from finite-source modeling will be compared.
Inner-outer interactions in the convective atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Salesky, S.
2017-12-01
Recently, observational and numerical studies have revealed the existence of so-called large scale motions (LSMs) that populate the logarithmic layer of wall-bounded turbulent shear flows and modulate the amplitude and frequency of turbulence dynamics near the ground. Properties of LSMs are well understood in neutrally stratified flows over smooth and rough walls. However, the implications of previous studies for the convective atmospheric boundary layer (CBL) are not entirely clear, since the morphology of both small-scale and large-scale turbulent structures is known to be strongly affected by buoyancy [e.g. Salesky et al., Bound.-Layer Meteorol. 163:41-68 (2017)]. In the present study, inner-outer interactions in the CBL are investigated using a suite of large eddy simulations spanning neutral to highly convective conditions. Simulation results reveal that, as the atmosphere becomes increasingly unstable, the inclination angle of structures near the ground increases from 12-15° to nearly 90°. Furthermore, the scale separation between the inner and outer peaks in the premultiplied velocity spectra decreases until only a single peak remains (comparable in magnitude to the boundary layer depth). The extent to which the amplitude modulation of surface layer turbulence by outer layer structures changes with increasing instability will be considered, following the decoupling procedure proposed by Mathis et al. [J. Fluid Mech., vol 628, 311-337 (2009)]. Frequency modulation of surface layer turbulence also will be examined, following the wavelet analysis approach of Baars et al. [Exp. Fluids, 56:188, (2015)].
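A minimal sketch of the amplitude-modulation diagnostic referenced above (the decoupling procedure of Mathis et al.): low-pass the streamwise velocity to obtain the large-scale signal, take the Hilbert envelope of the remaining small-scale signal, low-pass that envelope, and correlate the two. The synthetic signal and the cutoff frequency are illustrative assumptions, not LES output.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulation_coefficient(u, fs, cutoff_hz):
    """Single-point amplitude-modulation correlation coefficient (Mathis et al.-style decoupling)."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    u_large = filtfilt(b, a, u)                                  # large-scale component
    u_small = u - u_large                                        # small-scale component
    envelope_large = filtfilt(b, a, np.abs(hilbert(u_small)))    # large-scale part of the envelope
    a1 = u_large - u_large.mean()
    a2 = envelope_large - envelope_large.mean()
    return np.sum(a1 * a2) / np.sqrt(np.sum(a1**2) * np.sum(a2**2))

# synthetic signal: small-scale fluctuations whose amplitude is modulated by a large-scale wave
fs, T = 200.0, 100.0
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
large = np.sin(2.0 * np.pi * 0.5 * t)                        # 0.5 Hz large-scale motion
small = (1.0 + 0.5 * large) * rng.normal(0.0, 1.0, t.size)   # amplitude-modulated small scales
u = large + 0.2 * small

print("amplitude-modulation coefficient R =",
      round(amplitude_modulation_coefficient(u, fs, cutoff_hz=2.0), 2))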
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.
2015-12-01
The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.