An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large-scale systems are defined as systems requiring more than one decision maker for control. Decentralized control and decomposition are discussed for large-scale dynamic systems. Information and many-person decision problems are analyzed.
NASA Astrophysics Data System (ADS)
Guiquan, Xi; Lin, Cong; Xuehui, Jin
2018-05-01
As an important platform for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee for economic and social development. Research on the management of large-scale scientific facilities can play a key role in scientific research, sociology, and national strategy. This paper reviews the characteristics of large-scale scientific facilities and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation, and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.
1975-01-01
The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.
Human-Machine Cooperation in Large-Scale Multimedia Retrieval: A Survey
ERIC Educational Resources Information Center
Shirahama, Kimiaki; Grzegorzek, Marcin; Indurkhya, Bipin
2015-01-01
"Large-Scale Multimedia Retrieval" (LSMR) is the task of rapidly analyzing a large amount of multimedia data, such as images or videos, and accurately finding the items relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more…
Large-scale Eucalyptus energy farms and power cogeneration
Robert C. Noroña
1983-01-01
A thorough evaluation of all factors possibly affecting a large-scale planting of eucalyptus is foremost in determining the cost effectiveness of the planned operation. Seven basic areas of concern must be analyzed: (1) species selection, (2) site preparation, (3) planting, (4) weed control, (5)...
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation, we perform sentiment analysis on live Twitter feeds, scoring each individual tweet. To analyze sentiments, we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
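The per-tweet scoring step described in this abstract can be sketched as follows. The tiny polarity lexicon and the `classify` helper are hypothetical stand-ins: the paper trains on SentiWordNet, and the map over tweets would run as a distributed Spark transformation rather than a local loop.

```python
# Minimal sketch of lexicon-based tweet scoring. The POLARITY dict is a
# hypothetical stand-in for SentiWordNet; in a Spark pipeline, classify()
# would be applied to each tweet via a distributed map transformation.
import re

POLARITY = {"good": 1.0, "great": 1.0, "love": 1.0,
            "bad": -1.0, "terrible": -1.0, "hate": -1.0}  # illustrative lexicon

def score_tweet(text: str) -> float:
    """Sum the polarities of known words; the sign gives the sentiment."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(POLARITY.get(w, 0.0) for w in words)

def classify(text: str) -> str:
    s = score_tweet(text)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

tweets = ["I love this great phone", "terrible service, I hate it", "just a phone"]
labels = [classify(t) for t in tweets]
print(labels)  # ['positive', 'negative', 'neutral']
```

In the distributed setting the same `classify` function is what each worker applies to its partition of the tweet stream; only the orchestration (Spark context, streaming source) differs.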
ERIC Educational Resources Information Center
Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar
2017-01-01
Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…
Climbing the Corporate Ladder.
ERIC Educational Resources Information Center
Smith, Christopher
The employment records of a large northeastern manufacturing plant were analyzed to test the opportunity for career advancement within a large-scale industrial establishment. The employment records analyzed covered the years 1921 through 1937 and more than 28,000 different employees (male and female). The company was selected as being…
In this paper we develop a conceptual framework for selecting stressor data and analyzing their relationship to geographic patterns of species richness at large spatial scales. Aspects of climate and topography, which are not stressors per se, have been most strongly linked with g...
Michael Keller; Maria Assunção Silva-Dias; Daniel C. Nepstad; Meinrat O. Andreae
2004-01-01
The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multi-disciplinary, multinational scientific project led by Brazil. LBA researchers seek to understand Amazonia in its global context, especially with regard to regional and global climate. Current development activities in Amazonia, including deforestation, logging, cattle ranching, and agriculture...
ERIC Educational Resources Information Center
Kind, Per Morten
2013-01-01
The paper analyzes conceptualizations in the science frameworks in three large-scale assessments, Trends in Mathematics and Science Study (TIMSS), Programme for International Student Assessment (PISA), and National Assessment of Educational Progress (NAEP). The assessments have a shared history, but have developed different conceptualizations. The…
The large scale microelectronics Computer-Aided Design and Test (CADAT) system
NASA Technical Reports Server (NTRS)
Gould, J. M.
1978-01-01
The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.
NASA Astrophysics Data System (ADS)
Yue, X.; Wang, W.; Schreiner, W. S.; Kuo, Y. H.; Lei, J.; Liu, J.; Burns, A. G.; Zhang, Y.; Zhang, S.
2015-12-01
Based on slant total electron content (TEC) observations made by ~10 satellites and ~450 ground IGS GNSS stations, we constructed a 4-D ionospheric electron density reanalysis for the March 17, 2013 geomagnetic storm. Four main large-scale ionospheric disturbances are identified from the reanalysis: (1) the positive storm during the initial phase; (2) the SED (storm enhanced density) structure in both the northern and southern hemispheres; (3) the large positive storm in the main phase; and (4) the significant negative storm at middle and low latitudes during the recovery phase. We then ran the NCAR-TIEGCM model with the Heelis empirical electric potential model as the polar input. The TIEGCM reproduces 3 of the 4 large-scale structures (all except the SED) very well. We further analyzed the altitudinal variations of these large-scale disturbances and found several interesting features, such as the altitude variation of the SED and the rotation of the positive/negative storm phase with local time. These structures could not be identified clearly with traditionally used data sources, which have either no global coverage or no vertical resolution. The drivers from the TIEGCM simulations, such as neutral wind/density and the electric field, are also analyzed to self-consistently explain the identified disturbance features.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
© 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
Large-scale structure of randomly jammed spheres
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio
2017-05-01
We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
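For readers unfamiliar with the diagnostic, the low-wave-vector structure factor analyzed above can be sketched in a minimal form, S(k) = |Σⱼ exp(−i k·rⱼ)|²/N, evaluated on the wave vectors allowed by a periodic box. The random configuration below is only a stand-in for an actual jammed packing; a real study would use far more particles and average over configurations.

```python
# Sketch of the two-point static structure factor S(k) = |sum_j exp(-i k.r_j)|^2 / N
# for a periodic 3-D particle configuration. Random uniform positions stand in
# for an actual jammed packing (for an ideal gas, S(k) ~ 1 at all k; a
# hyperuniform packing would instead show S(k) -> 0 as k -> 0).
import numpy as np

rng = np.random.default_rng(0)
L = 10.0                                  # periodic box size
N = 500
pos = rng.uniform(0.0, L, size=(N, 3))    # stand-in particle positions

def structure_factor(pos, L, n_max=3):
    """S(k) on the allowed wave vectors k = (2*pi/L) * (nx, ny, nz)."""
    ks, sks = [], []
    for nx in range(n_max + 1):
        for ny in range(n_max + 1):
            for nz in range(n_max + 1):
                if nx == ny == nz == 0:
                    continue              # skip k = 0 (trivial mode)
                k = (2.0 * np.pi / L) * np.array([nx, ny, nz])
                rho_k = np.exp(-1j * pos @ k).sum()   # Fourier mode of density
                ks.append(np.linalg.norm(k))
                sks.append(np.abs(rho_k) ** 2 / len(pos))
    return np.array(ks), np.array(sks)

k, S = structure_factor(pos, L)
print(S.mean())   # near 1 for uncorrelated positions
```

The paper's test for hyperuniformity amounts to checking whether S(k) extrapolates to zero at the smallest accessible k; for this uncorrelated stand-in it does not.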
ERIC Educational Resources Information Center
Hooper, Martin
2017-01-01
TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…
ERIC Educational Resources Information Center
Morgan, Robert P.; And Others
Opportunities for utilizing large-scale educational telecommunications delivery systems to aid in meeting needs of U.S. education are extensively analyzed in a NASA-funded report. Status, trends, and issues in various educational subsectors are assessed, along with current use of telecommunications and technology and factors working for and…
Home Language and Language Proficiency; A Large-Scale Longitudinal Study in Dutch Primary Schools.
ERIC Educational Resources Information Center
Driessen, Geert; van der Slik, Frans; De Bot, Kees
2002-01-01
Reports on a large-scale longitudinal study into the development of language proficiency of Dutch primary school children aged 7-10. Data on language proficiency and a range of background variables were analyzed. Results suggest that while immigrant children develop their language skill in Dutch considerably over 2 years, they are nonetheless…
Education of the handicapped child: Status, trend, and issues related to electronic delivery
NASA Technical Reports Server (NTRS)
Rothenberg, D.
1973-01-01
This study is part of a broader investigation of the role of large-scale educational telecommunications systems. Thus, data are analyzed and trends and issues discussed to provide information useful to the systems designer who wishes to identify and assess the opportunities for large-scale electronic delivery of education for the handicapped.
Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika
2013-01-01
The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.
The role of large scale motions on passive scalar transport
NASA Astrophysics Data System (ADS)
Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano
2014-11-01
We study a direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of the velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of the large-scale features of turbulence and the temperature field.
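Along a single homogeneous direction, the two-point cross-correlation diagnostic used above reduces to R(Δx) = ⟨u′(x) T′(x+Δx)⟩/(σᵤσ_T). A minimal one-dimensional sketch, with synthetic correlated signals standing in for the DNS velocity and temperature fields:

```python
# Sketch of a normalized two-point cross-correlation R(dx) between two
# fluctuating signals. Smoothed noise stands in for a velocity fluctuation
# with "large-scale" structure; the temperature signal is built to be
# partially correlated with it, mimicking passive scalar transport.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
u = rng.standard_normal(n)
u = np.convolve(u, np.ones(32) / 32, mode="same")   # impose large-scale structure
T = 0.8 * u + 0.2 * rng.standard_normal(n)           # temperature correlated with u

def two_point_corr(a, b, max_lag):
    """R(dx) = <a'(x) b'(x+dx)> / (sigma_a sigma_b), treating x as periodic."""
    a = a - a.mean()
    b = b - b.mean()
    norm = a.std() * b.std() * len(a)
    return np.array([(a * np.roll(b, -lag)).sum() / norm
                     for lag in range(max_lag + 1)])

R = two_point_corr(u, T, max_lag=64)
print(round(R[0], 2))   # strong correlation at zero separation, decaying with lag
```

In the DNS study the same construction is applied plane by plane in the channel; the decay of R with separation is what identifies the footprint of the large-scale motions on the scalar field.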
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boun...
Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards
ERIC Educational Resources Information Center
Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.
2011-01-01
This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search for nine rewards placed in…
NASA Technical Reports Server (NTRS)
Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.
1993-01-01
The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale-integrated/ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the earlier BJ493ZLQ3 model, with its exhaust emissions upgraded to the national GB5 standard through an improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its improved properties are analyzed in this paper. Based on the structure, technical parameters, and indices of the lubrication system, a model of the BJ493ZLQ4A diesel engine lubrication system was constructed using the Flowmaster flow simulation software. The properties of the lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate for the large-scale oil filter scheme satisfy the design requirements, while the small-scale scheme yields too low a main oil channel pressure and too high a …. Therefore, the small-scale oil filter is inadvisable, and the large-scale scheme is recommended.
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning, and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
Susan Will-Wolf; Sarah Jovan; Michael C. Amacher
2017-01-01
Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...
Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.
2009-01-01
A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN), database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147
A link between nonlinear self-organization and dissipation in drift-wave turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manz, P.; Birkenmeier, G.; Stroth, U.
Structure formation and self-organization in two-dimensional drift-wave turbulence take many different forms. Fluctuation data from a magnetized plasma are analyzed, and three mechanisms transferring kinetic energy to large-scale structures are identified. Besides the common vortex merger, two further mechanisms are observed: clustering of vortices constituting a large-scale strain field, and vortex thinning, where, due to the interactions of vortices of different scales, larger vortices are amplified by the smaller ones. The vortex-thinning mechanism appears to be the most efficient one for generating large-scale structures in drift-wave turbulence. Vortex merging as well as vortex clustering are accompanied by a strong energy transfer to small-scale noncoherent fluctuations (dissipation), balancing the negative entropy generation due to the self-organization process.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrana, Alexandra; Johnson, Matthew C.; Harris, Mary-Jean, E-mail: aterrana@perimeterinstitute.ca, E-mail: mharris8@perimeterinstitute.ca, E-mail: mjohnson@perimeterinstitute.ca
Due to cosmic variance we cannot learn any more about large-scale inhomogeneities from the primary cosmic microwave background (CMB) alone. More information on large scales is essential for resolving large angular scale anomalies in the CMB. Here we consider cross correlating the large-scale kinetic Sunyaev Zel'dovich (kSZ) effect and probes of large-scale structure, a technique known as kSZ tomography. The statistically anisotropic component of the cross correlation encodes the CMB dipole as seen by free electrons throughout the observable Universe, providing information about long wavelength inhomogeneities. We compute the large angular scale power asymmetry, constructing the appropriate transfer functions, and estimate the cosmic variance limited signal to noise for a variety of redshift bin configurations. The signal to noise is significant over a large range of power multipoles and numbers of bins. We present a simple mode counting argument indicating that kSZ tomography can be used to estimate more modes than the primary CMB on comparable scales. A basic forecast indicates that a first detection could be made with next-generation CMB experiments and galaxy surveys. This paper motivates a more systematic investigation of how close to the cosmic variance limit it will be possible to get with future observations.
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp ≈ 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
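At leading order, the hierarchical three-point amplitude discussed in this abstract is the skewness ratio S3 = ⟨δ³⟩/⟨δ²⟩² of the density contrast δ. A sketch with a toy weakly non-Gaussian field: the quadratic model δ = g + c(g² − 1) is an illustrative assumption, not the paper's model, chosen because perturbation theory gives S3 ≈ 6c for small c.

```python
# Sketch of the skewness ratio S3 = <d^3> / <d^2>^2 for a mean-zero density
# contrast field d. The quadratic map d = g + c*(g^2 - 1), with g standard
# normal, is a toy stand-in for an evolved density field; for it, the exact
# value is (6c + 8c^3)/(1 + 2c^2)^2, close to 6c for small c.
import numpy as np

rng = np.random.default_rng(3)
c = 0.1
g = rng.standard_normal(1_000_000)
d = g + c * (g**2 - 1.0)          # mean-zero, weakly non-Gaussian contrast

def skewness_ratio(d):
    """S3 = <d^3> / <d^2>^2, the leading hierarchical amplitude."""
    return np.mean(d**3) / np.mean(d**2) ** 2

S3_hat = skewness_ratio(d)
print(S3_hat)   # expected near 6*c = 0.6 for this toy model
```

Measuring S3 from galaxy counts and comparing it with the model prediction is, schematically, the test the paper applies to scale-dependent-bias scenarios.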
Sound production due to large-scale coherent structures
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
Analyzing big data with the hybrid interval regression methods.
Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying
2014-01-01
Big data is a current trend with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a major challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the data distribution is hard to describe and the separation margin between classes is ambiguous. PMID:25143968
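The SSVM's key idea is to replace the non-smooth plus (hinge) function with a smooth approximation so that unconstrained gradient-based optimization applies. The sketch below is a toy stdlib illustration of a Lee-Mangasarian-style smooth objective, not the authors' implementation; the function names, toy data, and hyperparameters are ours:

```python
import math

def smooth_plus(x, a=5.0):
    # p(x, a) = x + (1/a) * log(1 + exp(-a*x)): smooth approximation of max(x, 0)
    z = a * x
    if z > 30:
        return x
    if z < -30:
        return 0.0
    return x + math.log1p(math.exp(-z)) / a

def _sigmoid(z):
    # d/dx smooth_plus(x, a) = sigmoid(a*x)
    if z > 30:
        return 1.0
    if z < -30:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def ssvm_fit(X, y, nu=1.0, a=5.0, lr=0.01, epochs=3000):
    """Gradient descent on (nu/2)*sum_i p(1 - y_i*(w.x_i + b))^2 + (||w||^2 + b^2)/2."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), b  # gradient of the regularization term
        for xi, yi in zip(X, y):
            m = 1.0 - yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            c = nu * smooth_plus(m, a) * _sigmoid(a * m)  # chain rule through p(.)^2
            for j in range(d):
                gw[j] -= c * yi * xi[j]
            gb -= c * yi
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def ssvm_predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

Because the smoothed objective is strictly convex, plain gradient descent converges to its unique minimizer on a toy linearly separable set; production SSVM solvers use Newton-type iterations instead for speed.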
NASA Technical Reports Server (NTRS)
Schlundt, D. W.
1976-01-01
The installed performance degradation of a swivel nozzle thrust deflector system, observed at increased vectoring angles during a large-scale test program, was investigated and improved. Small-scale models were used to generate performance data for analyzing selected swivel nozzle configurations. A single-swivel nozzle design model with five different nozzle configurations and a twin-swivel nozzle design model, scaled to 0.15 of the size of the large-scale test hardware, were statically tested at low exhaust pressure ratios of 1.4, 1.3, 1.2, and 1.1, vectored at four nozzle positions from 0 deg (cruise) through 90 deg (the vertical position used for the VTOL mode).
Environmental status of livestock and poultry sectors in China under current transformation stage.
Qian, Yi; Song, Kaihui; Hu, Tao; Ying, Tianyu
2018-05-01
Intensive animal husbandry has raised great environmental concern in many developed countries, and some developing countries are still experiencing pollution from the livestock and poultry sectors. Driven by large demand, China has seen a remarkable increase in dairy and meat production, especially during the transformation from conventional household breeding to large-scale industrial breeding. At the same time, a large amount of manure from the livestock and poultry sector is released into waterbodies and soil, causing eutrophication and soil degradation. This condition is reinforced in large-scale cultivation, where the amount of manure exceeds the soil nutrient capacity if not treated or utilized properly. Our research aims to analyze whether the transformation in raising scale is beneficial to the environment and to present the latest status of the livestock and poultry sectors in China. Estimating the pollutants generated and discharged by the livestock and poultry sector in China will facilitate legislation on manure management. This paper analyzes the pollutants generated from the manure of the five principal commercial animals under different farming practices. The results show that fattening pigs contribute almost half of the pollutants released from manure. Moreover, beef cattle exert the largest environmental impact per unit of production, about 2-3 times that of pork and 5-20 times that of chicken. Animals raised in large-scale feedlots generate fewer pollutants than those raised in households. The shift towards industrial production of livestock and poultry is easier to manage from an environmental perspective, but adequately managed large-scale cultivation is needed. Regulatory control, manure treatment, and financial subsidies for manure treatment and utilization are recommended to achieve ecological agriculture in China.
LARGE-SCALE PREDICTIONS OF MOBILE SOURCE CONTRIBUTIONS TO CONCENTRATIONS OF TOXIC AIR POLLUTANTS
This presentation shows concentrations and deposition of toxic air pollutants predicted by a 3-D air quality model, the Community Multi Scale Air Quality (CMAQ) modeling system. Contributions from both on-road and non-road mobile sources are analyzed.
Large-scale gene function analysis with the PANTHER classification system.
Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D
2013-08-01
The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
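One of the statistical tests such systems offer is over-representation analysis of a gene list against a reference set. As a hedged sketch of what such a test computes (a one-sided hypergeometric, Fisher-style p-value; this is our stdlib illustration of the general technique, not PANTHER's actual API or its exact choice of statistic):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for X ~ Hypergeom(N, K, n): the chance of drawing at least k
    category members in a sample of n from a population of N containing K."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def over_representation(study, population, category):
    """One-sided over-representation p-value of a functional category
    (a set of gene identifiers) within a study set of genes."""
    k = len(study & category)
    return k, hypergeom_sf(k, len(population), len(population & category), len(study))
```

A small p-value indicates that the category appears in the study list more often than expected by chance, which is the basic signal behind genome-wide functional enrichment reports.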
The combustion behavior of large scale lithium titanate battery
Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua
2015-01-01
Safety remains a major obstacle to large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, while the mass loss rate, temperature, and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; it becomes more complicated at higher SOC, with a sudden ejection of smoke. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121°C on the anode tab and 139-147°C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064
Derivation of large-scale cellular regulatory networks from biological time series data.
de Bivort, Benjamin L
2010-01-01
Pharmacological agents and other perturbants of cellular homeostasis appear to nearly universally affect the activity of many genes, proteins, and signaling pathways. While this is due in part to nonspecificity of action of the drug or cellular stress, the large-scale self-regulatory behavior of the cell may also be responsible, as this typically means that when a cell switches states, dozens or hundreds of genes will respond in concert. If many genes act collectively in the cell during state transitions, rather than every gene acting independently, models of the cell can be created that are comprehensive of the action of all genes, using existing data, provided that the functional units in the model are collections of genes. Techniques to develop these large-scale cellular-level models are provided in detail, along with methods of analyzing them, and a brief summary of major conclusions about large-scale cellular networks to date.
NASA Astrophysics Data System (ADS)
Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.
2003-10-01
Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully using digital-domain simulation based on a hardware description language (HDL). In present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are built up from basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate how to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and timing of a circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
NASA Astrophysics Data System (ADS)
He, Qing; Li, Hong
The belt conveyor is one of the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible, and it is essential for studying dynamic properties, improving efficiency and productivity, and guaranteeing safe, reliable, and stable running. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling, and simulation of the main components and the whole system, as well as nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.
Ionospheric response to 17 March 2013 geomagnetic storm identified by data assimilation result
NASA Astrophysics Data System (ADS)
Yue, Xinan; Zhao, Biqiang; Hu, Lianhuan; She, Chengli
2017-04-01
Based on slant total electron content (TEC) observations made by 10 satellites and 450 ground-based IGS GNSS stations, we constructed a 4-D ionospheric electron density reanalysis of the March 17, 2013 geomagnetic storm. Four main large-scale ionospheric disturbances are identified from the reanalysis: (1) a positive storm during the initial phase; (2) a storm enhanced density (SED) structure in both the northern and southern hemispheres; (3) a large positive storm in the main phase; and (4) a significant negative storm at middle and low latitudes during the recovery phase. We then ran the NCAR-TIEGCM model with the Heelis electric potential empirical model as polar input. The TIEGCM reproduces three of the four large-scale structures (all except the SED) very well. We further analyzed the altitudinal variations of these large-scale disturbances and found several notable features, such as the altitude variation of the SED and the rotation of the positive/negative storm phase with local time. These structures could not be identified clearly by traditionally used data sources, which lack either global coverage or vertical resolution. The drivers, such as the neutral wind/density and electric field from the TIEGCM simulations, are also analyzed to self-consistently explain the identified disturbance features.
Turbulent Superstructures in Rayleigh-Bénard convection at different Prandtl number
NASA Astrophysics Data System (ADS)
Schumacher, Jörg; Pandey, Ambrish; Ender, Martin; Westermann, Rüdiger; Scheel, Janet D.
2017-11-01
Large-scale patterns of the temperature and velocity fields in horizontally extended cells can be considered turbulent superstructures in Rayleigh-Bénard convection (RBC). These structures emerge once the turbulent fluctuations are removed by a finite-time average; their existence has been reported, for example, by Bailon-Cuba et al. This large-scale order bears a strong similarity to the well-studied patterns of the weakly nonlinear regime at lower Rayleigh number in RBC. In the present work we analyze the superstructures of RBC at different Prandtl numbers, between Pr = 0.005 for liquid sodium and 7 for water. The characteristic evolution time scales, the typical spatial extension of the rolls, and the properties of the defects of the resulting superstructure patterns are analyzed. Data are obtained from well-resolved spectral element direct numerical simulations. The work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.
ERIC Educational Resources Information Center
Smith, Nathaniel J.
2011-01-01
This dissertation contains several projects, each addressing different questions with different techniques. In chapter 1, I argue that they are unified thematically by their goal of "scaling up psycholinguistics"; they are all aimed at analyzing large data-sets using tools that reveal patterns to propose and test mechanism-neutral hypotheses about…
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimation. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
NASA Technical Reports Server (NTRS)
Over, Thomas M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
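The cascade construction the abstract builds on can be sketched as follows: starting from unit mass, each subdivision multiplies the local mass density by an i.i.d. generator W with E[W] = 1. This is a minimal illustration with a lognormal generator; the parameter names and values are ours, not the paper's fitted single-parameter cascade:

```python
import random

def random_cascade(levels, sigma=0.4, b=2, seed=0):
    """1-D discrete multiplicative random cascade with a lognormal generator.

    Each cell splits into b subcells; each subcell's mass is the parent mass
    times an i.i.d. weight W/b, where W is lognormal with mu = -sigma^2/2 so
    that E[W] = 1 and total mass is conserved on average across scales.
    """
    rng = random.Random(seed)
    field = [1.0]
    for _ in range(levels):
        field = [mass * rng.lognormvariate(-sigma ** 2 / 2.0, sigma) / b
                 for mass in field
                 for _ in range(b)]
    return field
```

The scaling of the spatial moments of such a simulated field with cell size is what the paper compares against instantaneous radar-derived rainfall scans.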
Large scale modulation of high frequency acoustic waves in periodic porous media.
Boutin, Claude; Rallu, Antoine; Hans, Stephane
2012-12-01
This paper deals with the description of the large-scale modulation of high-frequency acoustic waves in gas-saturated periodic porous media. High frequencies imply local dynamics at the pore scale and therefore an absence of scale separation in the usual sense of homogenization. However, although the pressure varies spatially within the pores (according to periodic eigenmodes), the mode amplitude can present a large-scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the large-scale amplitude modulation of high-frequency waves. The significant differences between modulations of simple and multiple modes are demonstrated and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.
Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector
NASA Astrophysics Data System (ADS)
Kumar, P.; Mishra, T.; Banerjee, R.
2017-12-01
India's power sector is responsible for nearly 37 percent of the country's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels), along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022), in the INDCs submitted under the Paris agreement. But large-scale integration of renewable energy is a complex process facing problems such as capital intensiveness, the matching of intermittent generation to loads with limited storage capacity, and reliability. This study therefore assesses the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyzes the implications for power sector operations. It uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely, with region-wise representation of wind and solar resources, and use the dispatch features for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 under three scenarios: a base case (no renewable additions), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low-RE scenario (50 GW solar, 30 GW wind). The results provide insights into the trade-offs involved in achieving mitigation targets and the associated investment decisions. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
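The multi-scale wavelet idea behind the first data-reduction method can be sketched with a Haar transform plus thresholding: small coefficients are dropped, and the load-balance vector is reconstructed from the survivors. This is a minimal stdlib illustration of the general technique, not the Libra implementation (Libra's actual transform, thresholds, and storage format are not specified here):

```python
import math

def haar_forward(x):
    """Orthonormal Haar transform of a length-2^k list.

    Returns [approximation, coarsest details, ..., finest details].
    """
    s = math.sqrt(2.0)
    coeffs = []
    while len(x) > 1:
        a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
        coeffs = d + coeffs  # finest-scale details stay at the end
        x = a
    return x + coeffs

def haar_inverse(c):
    s = math.sqrt(2.0)
    x, i = c[:1], 1
    while i < len(c):
        d = c[i:i + len(x)]
        i += len(d)
        x = [v for a, dd in zip(x, d) for v in ((a + dd) / s, (a - dd) / s)]
    return x

def compress(values, threshold):
    """Zero out near-zero wavelet coefficients; return (kept_count, reconstruction)."""
    c = [v if abs(v) > threshold else 0.0 for v in haar_forward(values)]
    return sum(1 for v in c if v != 0.0), haar_inverse(c)
```

Because the transform is orthonormal, the squared reconstruction error equals the sum of squares of the dropped coefficients, which makes the compression error straightforward to bound; piecewise-smooth load-balance profiles compress especially well.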
Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert Moulton
2001-01-01
The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...
Early childhood education: Status trends, and issues related to electronic delivery
NASA Technical Reports Server (NTRS)
Rothenberg, D.
1973-01-01
The status of, and trends and issues within, early childhood education which are related to the possibilities of electronic delivery of educational service are considered in a broader investigation of the role of large scale, satellite based educational telecommunications systems. Data are analyzed and trends and issues discussed to provide information useful to the system designer who wishes to identify and assess the opportunities for large scale electronic delivery in early childhood education.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
Study of an engine flow diverter system for a large scale ejector powered aircraft model
NASA Technical Reports Server (NTRS)
Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.
1981-01-01
Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clauss, D.B.
The analyses used to predict the response of a 1:8-scale model of a steel LWR containment building to static overpressurization are described and results are presented. Finite strain, large displacement, and nonlinear material properties were accounted for using finite element methods. Three-dimensional models were needed to analyze the penetrations, which included operable equipment hatches, personnel lock representations, and a constrained pipe. It was concluded that the scale model would fail due to leakage caused by large deformations of the equipment hatch sleeves. 13 refs., 34 figs., 1 tab.
Vicini, P; Fields, O; Lai, E; Litwack, E D; Martin, A-M; Morgan, T M; Pacanowski, M A; Papaluca, M; Perez, O D; Ringel, M S; Robson, M; Sakul, H; Vockley, J; Zaks, T; Dolsten, M; Søgaard, M
2016-02-01
High throughput molecular and functional profiling of patients is a key driver of precision medicine. DNA and RNA characterization has been enabled at unprecedented cost and scale through rapid, disruptive progress in sequencing technology, but challenges persist in data management and interpretation. We analyze the state-of-the-art of large-scale unbiased sequencing (LUS) in drug discovery and development, including technology, application, ethical, regulatory, policy and commercial considerations, and discuss issues of LUS implementation in clinical and regulatory practice. © 2015 American Society for Clinical Pharmacology and Therapeutics.
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien
2017-03-01
We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
Exclusively Visual Analysis of Classroom Group Interactions
ERIC Educational Resources Information Center
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-01-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…
"Second Chance": Some Theoretical and Empirical Remarks.
ERIC Educational Resources Information Center
Inbar, Dan E.; Sever, Rita
1986-01-01
Presents a conceptual framework of second-chance systems analyzable in terms of several basic parameters (targeted population, declared goals, processes, options for students, evaluation criteria, and implications for the regular system). Uses this framework to analyze an Israeli external high school, the subject of a large-scale study. Includes 3…
Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.
Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contradicted both the traditional view based on the hump-shaped theory of bathymetric pattern and the commonly admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
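The complementarity and redundancy of diversity indices discussed above can be made concrete with a small example. The sketch below is not from the study; it is a minimal illustration of three indices commonly included in such comparisons (species richness, Shannon H', and Simpson 1-D), computed from a hypothetical list of abundance counts.

```python
from math import log

def diversity_indices(abundances):
    """Species richness, Shannon H', and Simpson 1-D from a list of counts."""
    counts = [c for c in abundances if c > 0]
    total = sum(counts)
    p = [c / total for c in counts]          # relative abundances
    richness = len(counts)                   # number of species present
    shannon = -sum(pi * log(pi) for pi in p) # Shannon entropy H'
    simpson = 1.0 - sum(pi * pi for pi in p) # probability two draws differ
    return richness, shannon, simpson

# Four equally abundant species: H' = ln(4), Simpson 1-D = 0.75.
r, h, s = diversity_indices([10, 10, 10, 10])
print(r, round(h, 4), round(s, 4))
```

With uneven abundances the three indices diverge, which is exactly the complementarity/redundancy question the study addresses empirically.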
Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
Large scale complex engineering involves various functions, each of which is realized through the completion of one or more projects, so the combined projects that affect each function must be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, project portfolio techniques based on functional objectives were introduced, and the principles for applying such techniques were studied and proposed. In addition, the processes for combining projects were constructed. With the help of project portfolio techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large scale complex engineering.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude larger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation has been provided by the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modeled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for the design of wind farm layouts and turbine placements that take advantage of the large-scale structures contributing to wind turbine power.
Impacts of Large-Scale Circulation on Convection: A 2-D Cloud Resolving Model Study
NASA Technical Reports Server (NTRS)
Li, X; Sui, C.-H.; Lau, K.-M.
1999-01-01
Studies of the impacts of large-scale circulation on convection, and of the roles of convection in heat and water balances over tropical regions, are fundamentally important for understanding global climate changes. Heat and water budgets over a warm pool (SST = 29.5 C) and a cold pool (SST = 26 C) were analyzed based on simulations with a two-dimensional cloud resolving model. Here the sensitivity of heat and water budgets to different sizes of warm and cold pools is examined.
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.
Pattern-based, multi-scale segmentation and regionalization of EOSD land cover
NASA Astrophysics Data System (ADS)
Niesterowicz, Jacek; Stepinski, Tomasz F.
2017-10-01
The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization as new methods for analyzing EOSD on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of EOSD, depending on the selected pattern scale. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs, together with an attribute table listing landscape metrics, makes up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forests. The shapefile format and the extensive attribute table pertaining to the entire legend of EOSD are designed to facilitate a broad range of investigations in which assessment of the composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database for producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of the entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of landscape, across all of Canada.
Segmentation and Quantitative Analysis of Epithelial Tissues.
Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne
2016-01-01
Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Most often, epithelia are amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.
Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers
NASA Technical Reports Server (NTRS)
Balasubramaniam, R.; Subramanian, R. S.
1996-01-01
The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge for large scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID: 29253074
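Sparkhit's own API is not shown in the abstract, so the sketch below does not use it (or Apache Spark). It is a minimal, pure-Python stand-in for the MapReduce model that such frameworks distribute: a map step emits per-read k-mer counts and a reduce step merges them. The k-mer length and toy read sequences are hypothetical.

```python
from collections import Counter
from functools import reduce

K = 3  # hypothetical k-mer length for this toy example

def map_step(read):
    """Map: emit k-mer counts for a single sequencing read."""
    return Counter(read[i:i + K] for i in range(len(read) - K + 1))

def reduce_step(acc, part):
    """Reduce: merge partial k-mer counts from different workers."""
    acc.update(part)
    return acc

reads = ["ACGTAC", "GTACGT"]  # toy reads; a real run would stream from files
counts = reduce(reduce_step, map(map_step, reads), Counter())
print(counts["GTA"])
```

In a Spark-style framework, `map_step` would run in parallel across partitions of the input and `reduce_step` would combine partial results; the sequential version here only illustrates the programming model.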
Large-scale flow experiments for managing river systems
Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.
2011-01-01
Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.
Large Scale Processes and Extreme Floods in Brazil
NASA Astrophysics Data System (ADS)
Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.
2016-12-01
Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such a reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability at which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
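The abstract mentions supervised kernel principal component analysis for reducing moisture-flux fields to a low-dimensional space. As a hedged illustration of the unsupervised core of that idea (not the authors' method or data), the sketch below builds an RBF kernel Gram matrix, double-centers it, and extracts the leading eigenpair by power iteration; the toy points and the `gamma` parameter are arbitrary.

```python
from math import exp

def rbf_gram(X, gamma=0.5):
    """RBF kernel Gram matrix: K[i][j] = exp(-gamma * ||x_i - x_j||^2)."""
    n = len(X)
    return [[exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
             for j in range(n)] for i in range(n)]

def center_gram(K):
    """Double-center the Gram matrix, the standard kernel PCA preprocessing."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]

def leading_eig(K, iters=300):
    """Leading eigenvalue/eigenvector of a symmetric matrix by power iteration."""
    n = len(K)
    v = [float(i + 1) for i in range(n)]  # non-uniform start (uniform maps to 0)
    lam = 1.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

X = [[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [2.1, 2.0]]  # two hypothetical clusters
lam, v = leading_eig(center_gram(rbf_gram(X)))
```

For these two well-separated clusters, the leading kernel principal component assigns opposite signs to the two groups, which is the clustering-in-reduced-space behavior the study exploits.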
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Scaling within the spectral function approach
NASA Astrophysics Data System (ADS)
Sobczyk, J. E.; Rocco, N.; Lovato, A.; Nieves, J.
2018-03-01
Scaling features of the nuclear electromagnetic response functions unveil aspects of nuclear dynamics that are crucial for interpreting neutrino- and electron-scattering data. In the large momentum-transfer regime, the nucleon-density response function defines a universal scaling function, which is independent of the nature of the probe. In this work, we analyze the nucleon-density response function of 12C, neglecting collective excitations. We employ particle and hole spectral functions obtained within two distinct many-body methods, both widely used to describe electroweak reactions in nuclei. We show that the two approaches provide compatible nucleon-density scaling functions that for large momentum transfers satisfy first-kind scaling. Both methods yield scaling functions characterized by an asymmetric shape, although less pronounced than that of experimental scaling functions. This asymmetry, only mildly affected by final state interactions, is mostly due to nucleon-nucleon correlations, encoded in the continuum component of the hole spectral function.
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
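The negative binomial distribution mentioned above has a variance larger than its mean, the over-dispersion associated with hierarchically clustered counts, while tending toward Gaussian behavior at large scales. A small illustrative check (not from the paper; the parameters `r` and `p` are arbitrary):

```python
from math import comb

def nb_pmf(n, r, p):
    """Negative binomial pmf: P(N = n) with shape r and success probability p."""
    return comb(n + r - 1, n) * (p ** r) * ((1 - p) ** n)

r, p = 5, 0.4            # arbitrary illustrative parameters
mean = r * (1 - p) / p   # = 7.5
var = mean / p           # = 18.75 > mean: over-dispersed, unlike a Poisson
pmf_sum = sum(nb_pmf(n, r, p) for n in range(200))
print(mean, var, round(pmf_sum, 6))
```

The variance-to-mean ratio 1/p exceeds 1 for any p < 1, which is the counts-in-cells signature of clustering that the Gaussian distribution lacks.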
Mach Number effects on turbulent superstructures in wall bounded flows
NASA Astrophysics Data System (ADS)
Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven
2017-11-01
Planar and three-dimensional flow field measurements along a flat plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim to characterize the scaling, spatial organization, and topology of large scale turbulent superstructures in compressible flow. This facility is ideal for this investigation as the ratio of boundary layer thickness to test section spanwise extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large scale features is the size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 to 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods that show large scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high and low momentum structures is on the order of a boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA can show richer and more recognizable information on the local characteristics of diverse systems, successfully describing their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and revealing some of their inherent differences.
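The MSRQA procedure itself is not given in the abstract, but its ingredients (order patterns, coarse-graining to larger time scales, and a recurrence-rate measure) can be sketched. The code below is an illustrative reading of those steps, not the authors' implementation; the test series is arbitrary.

```python
def ordinal_pattern(window):
    """Rank-order (ordinal) pattern of a window: (0.1, 0.5, 0.3) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def coarse_grain(series, scale):
    """Non-overlapping window averages, used to move to larger time scales."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def recurrence_rate(series, m=3):
    """Fraction of index pairs whose order patterns recur (the RQA 'RR' measure)."""
    pats = [ordinal_pattern(series[i:i + m]) for i in range(len(series) - m + 1)]
    n = len(pats)
    return sum(pats[i] == pats[j] for i in range(n) for j in range(n)) / (n * n)

x = [(i * 7) % 10 for i in range(40)]      # a simple deterministic test series
rr1 = recurrence_rate(x)                   # recurrence rate at the original scale
rr2 = recurrence_rate(coarse_grain(x, 2))  # recurrence rate after coarse-graining
print(rr1, rr2)
```

Comparing the recurrence measure across several coarse-graining scales, rather than at a single scale, is the multiscale idea the paper develops.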
Wang, Jinghong; Lo, Siuming; Wang, Qingsong; Sun, Jinhua; Mu, Honglin
2013-08-01
Crowd density is a key factor that influences the moving characteristics of a large group of people during a large-scale evacuation. In this article, the macro features of crowd flow and subsequent rescue strategies were considered, and a series of characteristic crowd densities that affect large-scale people movement, as well as the maximum bearing density when the crowd is extremely congested, were analyzed. On the basis of characteristic crowd densities, queuing theory was applied to simulate crowd movement. Accordingly, the moving characteristics of the crowd and the effects of typical crowd density, which is viewed as the representation of the crowd's arrival intensity in front of the evacuation passageways, on rescue strategies were studied. Furthermore, a "risk axle of crowd density" is proposed to determine the efficiency of rescue strategies in a large-scale evacuation, i.e., whether the rescue strategies are able to effectively maintain or improve evacuation efficiency. Finally, through some rational hypotheses for the value of evacuation risk, a three-dimensional distribution of the evacuation risk is established to illustrate the risk axle of crowd density. This work aims to make some macro, but original, analysis of the risk of large-scale crowd evacuation from the perspective of the efficiency of rescue strategies. © 2012 Society for Risk Analysis.
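The abstract applies queuing theory to crowd flow at evacuation passageways without giving formulas. As a hedged illustration, the sketch below evaluates the standard steady-state M/M/1 results, with the arrival rate standing in for the crowd's arrival intensity and the service rate for passageway discharge capacity; the numbers are arbitrary.

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics: utilization, mean queue length, mean wait."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu                 # utilization of the passageway
    lq = rho * rho / (1 - rho)     # mean number waiting (excluding in service)
    wq = lq / lam                  # mean waiting time, by Little's law
    return rho, lq, wq

# Arrivals at 0.8 persons per unit time against a capacity of 1.0.
rho, lq, wq = mm1_metrics(lam=0.8, mu=1.0)
print(rho, lq, wq)
```

The sharp growth of the queue length as the arrival intensity approaches capacity is the kind of threshold behavior that motivates density-based rescue strategies.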
NASA Astrophysics Data System (ADS)
Garmay, Yu.; Shvetsov, A.; Karelov, D.; Lebedev, D.; Radulescu, A.; Petukhov, M.; Isaev-Ivanov, V.
2012-02-01
Based on X-ray crystallographic data available at the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, which is known to be a long helical filament, was approximated by a trimer, simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in the atom coordinate space showed that rotation of the C-terminal domain is correlated with the conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with the large-scale filament conformation. PCA of the trajectories in the main chain dihedral angle coordinate space indicates the coexistence of several different large-scale conformations of the modeled trimer. In order to clarify the relationship of independent domain orientation with large-scale filament conformation, we have analyzed independent domain motions and their implications for the filament geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitesell, C. D.
1980-01-01
In 1980, 200 acres of eucalyptus trees were planted for a research and development biomass energy plantation, bringing the total area under cultivation to 300 acres. Of this total acreage, 90 acres or 30% was planted in experimental plots. The remaining 70% of the cultivated area was closely monitored to determine the economic cost/benefit ratio of large scale biomass energy production. In the large scale plantings, standard field practices were set up for all phases of production: nursery, clearing, planting, weed control and fertilization. These practices were constantly evaluated for potential improvements in efficiency and reduced cost. Promising experimental treatments were implemented on a large scale to test their effectiveness under field production conditions. In the experimental areas all scheduled data collection in 1980 has been completed and most measurements have been keypunched and analyzed. Soil samples and leaf samples have been analyzed for nutrient concentrations. Crop logging procedures have been set up to monitor tree growth through plant tissue analysis. An intensive computer search on biomass, nursery practices, harvesting equipment and herbicide applications has been completed through the services of the US Forest Service.
Non scale-invariant density perturbations from chaotic extended inflation
NASA Technical Reports Server (NTRS)
Mollerach, Silvia; Matarrese, Sabino
1991-01-01
Chaotic inflation is analyzed in the framework of scalar-tensor theories of gravity. Fluctuations in the energy density arise from quantum fluctuations of the Brans-Dicke field and of the inflaton field. The spectrum of perturbations is studied for a class of models: it is non-scale-invariant and, for certain values of the parameters, it has a peak. If the peak appears at astrophysically interesting scales, it may help to reconcile the Cold Dark Matter scenario for structure formation with large scale observations.
Using Syntactic Patterns to Enhance Text Analytics
ERIC Educational Resources Information Center
Meyer, Bradley B.
2017-01-01
Large scale product and service reviews proliferate and are commonly found across the web. The ability to harvest, digest and analyze a large corpus of reviews from online websites is still however a difficult problem. This problem is referred to as "opinion mining." Opinion mining is an important area of research as advances in the…
Regional gradient analysis and spatial pattern of woody plant communities in Oregon forests.
J.L. Ohmann; T.A. Spies
1998-01-01
Knowledge of regional-scale patterns of ecological community structure, and of factors that control them, is largely conceptual. Regional- and local-scale factors associated with regional variation in community composition have not been quantified. We analyzed data on woody plant species abundance from 2443 field plots across natural and seminatural forests and...
Effects of individual, community and landscape drivers on the dynamics of a wildland forest epidemic
Sarah E. Haas; J. Hall Cushman; Whalen W. Dillon; Nathan E. Rank; David M. Rizzo; Ross K. Meentemeyer
2016-01-01
The challenges posed by observing host-pathogen-environment interactions across large geographic extents and over meaningful time scales limit our ability to understand and manage wildland epidemics. We conducted a landscape-scale, longitudinal study designed to analyze the dynamics of sudden oak death (an emerging forest disease caused by Phytophthora...
Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES
NASA Astrophysics Data System (ADS)
Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis
2015-11-01
Wake meandering, a large scale motion in wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake, from laboratory scale to utility scale wind turbines. In the dynamic wake meandering model, wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the significance of the nacelle for the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluids, 2015) is demonstrated with large eddy simulations (LES) using an immersed boundary method with grids fine enough to resolve the turbine geometric characteristics. The three-dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light on the role of the energetic nacelle vortex in wake meandering. This work was supported by the Department of Energy (DOE) (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000) and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing.
Cyclicity in Upper Mississippian Bangor Limestone, Blount County, Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronner, R.L.
1988-01-01
The Upper Mississippian (Chesterian) Bangor Limestone in Alabama consists of a thick, complex sequence of carbonate platform deposits. A continuous core through the Bangor on Blount Mountain in north-central Alabama provides the opportunity to analyze the unit for cyclicity and to identify controls on the vertical facies sequence. Lithologies from the core represent four general environments of deposition: (1) subwave-base, open marine, (2) shoal, (3) lagoon, and (4) peritidal. Analysis of the vertical sequence of lithologies in the core indicates the presence of eight large-scale cycles dominated by subtidal deposits, but defined on the basis of peritidal caps. These large-scale cycles can be subdivided into 16 small-scale cycles that may be entirely subtidal but illustrate upward shallowing followed by rapid deepening. Large-scale cycles range from 33 to 136 ft thick, averaging 68 ft; small-scale cycles range from 5 to 80 ft thick and average 34 ft. Small-scale cycles have an average duration of approximately 125,000 years, which is compatible with Milankovitch periodicity. The large-scale cycles have an average duration of approximately 250,000 years, which may simply reflect variations in the amplitude of sea-level fluctuation or the influence of tectonic subsidence along the southeastern margin of the North American craton.
On the Instability of Large Slopes in the Upstream of Wu River, Taiwan
NASA Astrophysics Data System (ADS)
Shou, Keh-Jian; Lin, Jia-Fei
2015-04-01
Considering the existence of various types of landslides (shallow and deep-seated) and the importance of protection targets (the landslide might affect a residential area, cut a road, isolate a village, etc.), this study aims to analyze the landslide susceptibility along the Lixing Industrial Road, i.e., Nantou County Road # 89, in the upstream of Wu River. Focusing on the selected typical large scale landslides, the data and information of the landslides were collected from the field and the government (including the local government, the Soil and Water Conservation Bureau, and the highway agencies). Based on the data of Li-DAR and the information from boreholes, the temporal behavior and the complex mechanism of large scale landslides were analyzed. To assess the spatial hazard of the landslides, probabilistic analysis was applied. The study of the landslide mechanism can help to understand the behavior of landslides in similar geologic conditions, and the results of hazard analysis can be applied for risk prevention and management in the study area.
ERIC Educational Resources Information Center
Blaney, Jennifer; Filer, Kimberly; Lyon, Julie
2014-01-01
Critical reflection allows students to synthesize their learning and deepen their understanding of an experience (Ash & Clayton, 2009). A recommended reflection method is for students to write essays about their experiences. However, on a large scale, such reflection essays become difficult to analyze in a meaningful way. At Roanoke College,…
Academic-industrial partnerships in drug discovery in the age of genomics.
Harris, Tim; Papadopoulos, Stelios; Goldstein, David B
2015-06-01
Many US FDA-approved drugs have been developed through productive interactions between the biotechnology industry and academia. Technological breakthroughs in genomics, in particular large-scale sequencing of human genomes, are creating new opportunities to understand the biology of disease and to identify high-value targets relevant to a broad range of disorders. However, the scale of the work required to appropriately analyze large genomic and clinical data sets is challenging industry to develop a broader view of what areas of work constitute precompetitive research. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel
NASA Technical Reports Server (NTRS)
Wang, John T.; Lotts, Christine G.; Sleight, David W.
1999-01-01
This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with a 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions correlate reasonably well with the test results.
Global properties of the plasma in the outer heliosphere. I - Large-scale structure and evolution
NASA Technical Reports Server (NTRS)
Barnes, A.; Mihalov, J. D.; Gazis, P. R.; Lazarus, A. J.; Belcher, J. W.; Gordon, G. S., Jr.; Mcnutt, R. L., Jr.
1992-01-01
Pioneers 10 and 11, and Voyager 2, have active plasma analyzers as they proceed through heliocentric distances of the order of 30-50 AU, facilitating comparative studies of the global character of the outer solar wind and its variation over the solar cycle. Careful study of these data shows that the solar wind ion temperature remains constant beyond 15 AU, and that there may be large-scale variations of temperature with celestial longitude and heliographic latitude. There has thus far been no indication of a heliospheric terminal shock.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale, and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation-of-variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that, under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum when the fish population is distributed among both small and large clusters of schools.
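The fast/slow reduction described in this abstract can be illustrated with a minimal two-site sketch. This is an illustrative toy model, not the paper's actual five-equation system: the parameter values, symmetric migration rates, and logistic growth and catch terms are all assumptions chosen only to show how fast migration lets the total stock obey a single aggregated equation.

```python
def simulate_two_site(r=0.5, K=100.0, q=0.2, E=1.0, k_out=1.0, k_in=1.0,
                      eps=0.01, dt=1e-3, t_end=200.0):
    """Euler-integrate a toy two-site model: fast migration (rates ~ 1/eps)
    between an MPA (n1) and a fished area (n2), with slow logistic growth
    and harvesting q*E*n2. Returns the final total stock n1 + n2."""
    n1, n2 = 10.0, 10.0
    for _ in range(int(t_end / dt)):
        migration = (k_in * n2 - k_out * n1) / eps   # fast exchange term
        total = n1 + n2
        n1 += dt * (migration + r * n1 * (1.0 - total / K))
        n2 += dt * (-migration + r * n2 * (1.0 - total / K) - q * E * n2)
    return n1 + n2

# Aggregated slow model: dN/dt = r*N*(1 - N/K) - q*E*nu2*N, where nu2 is the
# fast-equilibrium fraction of fish in the fished area, k_out/(k_out + k_in).
nu2 = 0.5
n_star = 100.0 * (1.0 - 0.2 * 1.0 * nu2 / 0.5)   # aggregated equilibrium
n_final = simulate_two_site()
```

With symmetric migration the fast-equilibrium fraction in the fished area is 1/2, so the simulated total stock settles near the aggregated equilibrium K(1 - qEν₂/r), illustrating why the reduced model is a good guide to the full dynamics when the time scales are well separated.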
NASA Astrophysics Data System (ADS)
Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.
2016-12-01
National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time-consuming for use in large-scale SOC monitoring. We seek to determine whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yield valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral-associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on the DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. The correlations revealed regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
Astakhov, Vadim
2009-01-01
Interest in the simulation of large-scale metabolic networks, species development, and the genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as the transition of system functionality due to modifications in the system architecture, system environment, and system components. The dynamic core model is developed; the term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze the migration of specific functions in biosystems which undergo structural transitions induced by the environment; the term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poietic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into the processes of degeneration and recovery which take place in large-scale networks. We suggest that therapies able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. We also mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution: any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.
Large-scale magnetic fields, non-Gaussianity, and gravitational waves from inflation
NASA Astrophysics Data System (ADS)
Bamba, Kazuharu
2017-12-01
We explore the generation of large-scale magnetic fields in the so-called moduli inflation. The hypercharge electromagnetic fields couple to not only a scalar field but also a pseudoscalar one, so that the conformal invariance of the hypercharge electromagnetic fields can be broken. We explicitly analyze the strength of the magnetic fields on the Hubble horizon scale at the present time, the local non-Gaussianity of the curvature perturbations originating from the massive gauge fields, and the tensor-to-scalar ratio of the density perturbations. As a consequence, we find that the local non-Gaussianity and the tensor-to-scalar ratio are compatible with the recent Planck results.
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
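The population-dynamics idea can be made concrete with a minimal discrete-time sketch on a two-state Markov chain (an illustrative assumption; the paper itself uses the continuous-time cloning algorithm). Clones are advanced one step, weighted by exp(s·f), and resampled in proportion to their weights; the time-averaged log of the mean weight estimates the scaled cumulant generating function (SCGF), which for a finite chain equals the log of the largest eigenvalue of the tilted transition matrix.

```python
import math
import random

def scgf_cloning(P, f, s, n_clones=2000, t_steps=400, burn_in=50, seed=7):
    """Discrete-time cloning estimate of the SCGF lambda(s) for the additive
    observable A_T = sum_t f(state_t) of a two-state Markov chain with
    transition matrix P. Clones advance, get weighted, and are resampled."""
    rng = random.Random(seed)
    ws = (math.exp(s * f[0]), math.exp(s * f[1]))   # per-state weights
    states = [0] * n_clones
    log_mean_w = []
    for t in range(t_steps):
        # advance every clone one step of the chain
        states = [1 if rng.random() < P[x][1] else 0 for x in states]
        weights = [ws[x] for x in states]
        mean_w = sum(weights) / n_clones
        if t >= burn_in:
            log_mean_w.append(math.log(mean_w))
        # selection: resample the population in proportion to the weights
        states = rng.choices(states, weights=weights, k=n_clones)
    return sum(log_mean_w) / len(log_mean_w)

P = [[0.9, 0.1], [0.5, 0.5]]   # toy two-state chain
f = [0.0, 1.0]                 # observable: time spent in state 1
s = 0.5
lam = scgf_cloning(P, f, s)

# Exact SCGF: log of the largest eigenvalue of the tilted 2x2 matrix
# M[i][j] = P[i][j] * exp(s * f[j]).
e = math.exp(s)
tr = 0.9 + 0.5 * e
det = (0.9 * 0.5 - 0.1 * 0.5) * e
lam_exact = math.log((tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0)
```

The finite-population and finite-time deviations of `lam` from `lam_exact` are exactly the scaling effects the abstract is concerned with; shrinking `n_clones` or `t_steps` makes them visible.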
Gene Expression Analysis: Teaching Students to Do 30,000 Experiments at Once with Microarray
ERIC Educational Resources Information Center
Carvalho, Felicia I.; Johns, Christopher; Gillespie, Marc E.
2012-01-01
Genome scale experiments routinely produce large data sets that require computational analysis, yet there are few student-based labs that illustrate the design and execution of these experiments. In order for students to understand and participate in the genomic world, teaching labs must be available where students generate and analyze large data…
ERIC Educational Resources Information Center
York, Travis; Becker, Christian
2012-01-01
Despite increased attention for environmental sustainability programming, large-scale adoption of pro-environmental behaviors has been slow and largely short-term. This article analyzes the crucial role of ethics in this respect. The authors utilize an interdisciplinary approach drawing on virtue ethics and cognitive development theory to…
John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan
2010-01-01
Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...
The Renewed Primary School in Belgium: Analysis of the Local Innovation Policy.
ERIC Educational Resources Information Center
Vandenberghe, Roland
The Renewed Primary School project in Belgium is analyzed in this paper in terms of organizational response to a large-scale innovation, which is characterized by its multidimensionality, by the large number of participating schools, and by a complex support structure. Section 2 of the report presents an elaborated description of these…
Large-Scale Diversity of Slope Fishes: Pattern Inconsistency between Multiple Diversity Indices
Gaertner, Jean-Claude; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A.; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3′-45°7′ N; 5°3′ W-28° E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted with both the traditional view based on the hump-shaped theory for bathymetric patterns and the commonly admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling-design standardization, such as species richness. PMID:23843962
A comprehensive study on urban true orthorectification
Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao
2005-01-01
To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study of the theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merging for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, the optimization of seamlines for automatic mosaicking, and the radiometric balance of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and the relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrate that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in large-scale urban aerial images. © 2005 IEEE.
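The street-visibility geometry mentioned in the abstract reduces, for a single building viewed from one camera station, to similar triangles: a building of height h whose far wall sits at horizontal distance d from nadir hides a ground span of d·h/(H−h) behind it for a camera at height H. The sketch below (illustrative function names; a deliberate simplification of the paper's full visibility analysis) encodes that relation.

```python
def occluded_ground_span(flight_height, building_height, radial_distance):
    """Ground length hidden behind a building's far wall (away from nadir),
    from similar triangles on the camera ray grazing the roof edge."""
    H, h, d = flight_height, building_height, radial_distance
    return d * h / (H - h)

def street_fully_visible(flight_height, building_height, radial_distance,
                         street_width):
    """True if a street of the given width, adjacent to the far side of the
    building, is entirely visible (the occluded span does not cover it)."""
    span = occluded_ground_span(flight_height, building_height,
                                radial_distance)
    return span < street_width

# A 50 m building 400 m from nadir, imaged from 1000 m: it hides about 21 m
# of ground, so a 30 m street is fully visible while a 15 m street is not.
span = occluded_ground_span(1000.0, 50.0, 400.0)
```

The same inequality, rearranged, tells a flight planner the maximum radial distance (or minimum flying height) at which a street of given width remains unoccluded.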
Who Should Join the Environmental Response Laboratory Network
Laboratories that analyze biological, chemical warfare agent, radiological, or toxic industrial chemical samples can join the ERLN. Members make up a critical infrastructure that delivers data necessary for responses to large-scale emergencies.
Torsional Oscillations in a Global Solar Dynamo
NASA Astrophysics Data System (ADS)
Beaudoin, P.; Charbonneau, P.; Racine, E.; Smolarkiewicz, P. K.
2013-02-01
We characterize and analyze rotational torsional oscillations developing in a large-eddy magnetohydrodynamical simulation of solar convection (Ghizaru, Charbonneau, and Smolarkiewicz, Astrophys. J. Lett. 715, L133, 2010; Racine et al., Astrophys. J. 735, 46, 2011) producing an axisymmetric, large-scale, magnetic field undergoing periodic polarity reversals. Motivated by the many solar-like features exhibited by these oscillations, we carry out an analysis of the large-scale zonal dynamics. We demonstrate that simulated torsional oscillations are not driven primarily by the periodically varying large-scale magnetic torque, as one might have expected, but rather via the magnetic modulation of angular-momentum transport by the large-scale meridional flow. This result is confirmed by a straightforward energy analysis. We also detect a fairly sharp transition in rotational dynamics taking place as one moves from the base of the convecting layers to the base of the thin tachocline-like shear layer formed in the stably stratified fluid layers immediately below. We conclude by discussing the implications of our analyses with regard to the mechanism of amplitude saturation in the global dynamo operating in the simulation, and speculate on the possible precursor value of torsional oscillations for the forecast of solar-cycle characteristics.
NASA Astrophysics Data System (ADS)
Langford, Z. L.; Kumar, J.; Hoffman, F. M.
2015-12-01
Observations indicate that over the past several decades, landscape processes in the Arctic have been changing or intensifying. A dynamic Arctic landscape has the potential to alter ecosystems across a broad range of scales. Accurate characterization is useful for understanding the properties and organization of the landscape, optimal sampling network design, measurement and process upscaling, and establishing a landscape-based framework for multi-scale modeling of ecosystem processes. This study seeks to delineate the landscape of the Seward Peninsula of Alaska into ecoregions using large volumes (terabytes) of high-spatial-resolution satellite remote-sensing data. Defining high-resolution ecoregion boundaries is difficult because many ecosystem processes in Arctic ecosystems occur at small local to regional scales, which are often poorly resolved by coarse-resolution satellites (e.g., MODIS). We seek to use data-fusion techniques and data analytics algorithms applied to Phased Array type L-band Synthetic Aperture Radar (PALSAR), Interferometric Synthetic Aperture Radar (IFSAR), Satellite for Observation of Earth (SPOT), WorldView-2, WorldView-3, and QuickBird-2 to develop high-resolution (~5 m) ecoregion maps for multiple time periods. Traditional analysis methods and algorithms are insufficient for analyzing and synthesizing such large geospatial data sets, and those algorithms rarely scale out onto large distributed-memory parallel computer systems. We seek to develop computationally efficient algorithms and techniques using high-performance computing for the characterization of Arctic landscapes. We will apply a variety of data analytics algorithms, such as cluster analysis, complex object-based image analysis (COBIA), and neural networks. We also propose to use representativeness analysis within the Seward Peninsula domain to determine optimal sampling locations for fine-scale measurements.
This methodology should provide an initial framework for analyzing dynamic landscape trends in Arctic ecosystems, such as shrubification and disturbances, and integration of ecoregions into multi-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Shaughnessy, Eric; Margolis, Robert
2017-04-01
The vast majority of U.S. residential solar PV installers are small, local-scale companies; however, the industry is relatively concentrated in a few large national-scale installers. We develop a novel approach using solar PV quote data to study the price behavior of large solar PV installers in the United States. Through a paired-differences approach, we find that large-installer quotes are higher, on average, than non-large-installer quotes made to the same customer. The difference is statistically significant and robust after controlling for factors such as system size, equipment quality, and time effects. The results suggest that low prices are not the primary value proposition of large-installer systems. We explore several hypotheses for this finding, including that large installers are able to exercise some market power and/or earn returns from their reputations.
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
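A tail analysis of this kind can be sketched generically: form logarithmic returns at a chosen time scale, then estimate the power-law tail exponent with the Hill estimator. The code below is an illustration on synthetic Pareto-tailed data, not the authors' Dow Jones dataset; the price series, parameters, and function names are assumptions for demonstration only.

```python
import math
import random

def log_returns(prices, step):
    """Logarithmic price changes over a horizon of `step` samples."""
    return [math.log(prices[i + step] / prices[i])
            for i in range(len(prices) - step)]

def hill_tail_exponent(samples, k):
    """Hill estimator of the power-law tail exponent, using the k largest
    magnitudes; the (k+1)-th largest serves as the tail threshold."""
    mags = sorted(abs(x) for x in samples if x != 0.0)
    tail = mags[-(k + 1):]
    threshold = tail[0]
    return k / sum(math.log(x / threshold) for x in tail[1:])

# Synthetic price series whose returns have Pareto tails with exponent 3.
random.seed(42)
rets = [0.01 * random.choice((-1, 1)) * random.paretovariate(3.0)
        for _ in range(50000)]
prices = [100.0]
for r in rets:
    prices.append(prices[-1] * math.exp(r))

alpha_hat = hill_tail_exponent(log_returns(prices, 1), k=500)
```

Because the Hill estimator uses only ratios of order statistics, it is scale-invariant, so the 0.01 scaling of the synthetic returns does not affect the recovered exponent.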
CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.
Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso
2017-08-01
Drug synergies are sought in order to identify combinations of drugs that are particularly beneficial. User-friendly software solutions that can assist analysis of large-scale datasets are required. CImbinator is a web service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator can quantify drug combination effects, using both the commonly employed median-effect equation and advanced experimental mathematical models describing dose-response relationships. CImbinator is written in Ruby and R. It uses the R package drc for advanced drug response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es ; the source code is open and available at https://github.com/Rbbt-Workflows/combination_index . A Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/ . Contact: asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
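The median-effect machinery underlying such tools can be sketched in a few lines. This is a generic Chou-Talalay-style illustration with made-up drug parameters, not CImbinator's actual implementation: invert the median-effect equation fa/(1−fa) = (D/Dm)^m to get the single-agent dose producing a given effect, then form the combination index CI = d1/D1 + d2/D2, where CI < 1 suggests synergy.

```python
def dose_for_effect(fa, dm, m):
    """Invert the median-effect equation fa/(1-fa) = (D/Dm)^m to get the
    single-agent dose D that alone produces the affected fraction fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, drug1, drug2):
    """Chou-Talalay combination index for doses (d1, d2) jointly producing
    effect fa; CI < 1 suggests synergy, CI > 1 antagonism."""
    dm1, m1 = drug1
    dm2, m2 = drug2
    return (d1 / dose_for_effect(fa, dm1, m1)
            + d2 / dose_for_effect(fa, dm2, m2))

# Hypothetical drugs: median-effect dose Dm = 1.0, slope m = 1.0 for both.
drug_a = drug_b = (1.0, 1.0)
# Half doses of each achieving the 50% effect level: exactly additive.
ci_additive = combination_index(0.5, 0.5, 0.5, drug_a, drug_b)   # -> 1.0
# The same half doses achieving a 60% effect: synergistic (CI < 1).
ci_synergy = combination_index(0.5, 0.5, 0.6, drug_a, drug_b)
```

In a batch setting, a tool would first fit (Dm, m) per drug from its dose-response curve and then evaluate CI across the measured combination grid.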
Pesaran, Bijan; Vinck, Martin; Einevoll, Gaute T; Sirota, Anton; Fries, Pascal; Siegel, Markus; Truccolo, Wilson; Schroeder, Charles E; Srinivasan, Ramesh
2018-06-25
New technologies to record electrical activity from the brain on a massive scale offer tremendous opportunities for discovery. Electrical measurements of large-scale brain dynamics, termed field potentials, are especially important to understanding and treating the human brain. Here, our goal is to provide best practices on how field potential recordings (electroencephalograms, magnetoencephalograms, electrocorticograms and local field potentials) can be analyzed to identify large-scale brain dynamics, and to highlight critical issues and limitations of interpretation in current work. We focus our discussion of analyses around the broad themes of activation, correlation, communication and coding. We provide recommendations for interpreting the data using forward and inverse models. The forward model describes how field potentials are generated by the activity of populations of neurons. The inverse model describes how to infer the activity of populations of neurons from field potential recordings. A recurring theme is the challenge of understanding how field potentials reflect neuronal population activity given the complexity of the underlying brain systems.
NASA Astrophysics Data System (ADS)
Martin, A. C. H.; Boutin, J.; Hauser, D.; Dinnat, E. P.
2014-08-01
The impact of ocean surface roughness on the ocean L-band emissivity is investigated using simultaneous airborne measurements from an L-band radiometer (CAROLS) and a C-band scatterometer (STORM) acquired in the Gulf of Biscay (off the French Atlantic coast) in November 2010. Two synergistic approaches are used to investigate the impact of surface roughness on the L-band brightness temperature (Tb). First, wind derived from the scatterometer measurements is used to analyze the roughness contribution to Tb as a function of wind and to compare it with the contributions simulated by the SMOS and Aquarius roughness models. Then, residuals from this mean relationship are analyzed in terms of the mean square slope derived from the STORM instrument. We show the improvement of new radiometric roughness models derived from SMOS and Aquarius satellite measurements in comparison with prelaunch models. The influence of wind azimuth on Tb could not be discerned in our data set. However, we point out the importance of taking into account large roughness scales (>20 cm), in addition to the small roughness scales (5 cm) rapidly affected by wind, to interpret radiometric measurements far from nadir. This was made possible thanks to simultaneous estimates of large and small roughness scales using STORM at small (7-16°) and large (30°) incidence angles.
Niama, Fabien Roch; Vidal, Nicole; Diop-Ndiaye, Halimatou; Nguimbi, Etienne; Ahombo, Gabriel; Diakabana, Philippe; Bayonne Kombo, Édith Sophie; Mayengue, Pembe Issamou; Kobawila, Simon-Charles; Parra, Henri Joseph; Toure-Kane, Coumba
2017-07-05
In this work, we investigated the genetic diversity of HIV-1 and the presence of mutations conferring antiretroviral drug resistance in 50 drug-naïve infected persons in the Republic of Congo (RoC). Samples were obtained before large-scale access to HAART, in 2002 and 2004. To assess HIV-1 genetic recombination, sequencing of the pol gene, encoding the protease and partial reverse transcriptase, was performed and analyzed with updated references, including newly characterized CRFs. The assessment of drug resistance was conducted according to the WHO protocol. Among the 50 samples analyzed for the pol gene, 50% were classified as intersubtype recombinants carrying complex structures inside the pol fragment. Five samples could not be classified (noted U). The most prevalent subtypes were G, with 10 isolates, and D, with 11 isolates. One isolate each of A, J, H, CRF05, CRF18 and CRF37 was also found. Two samples (4%), harboring the mutations M230L and Y181C associated with the TAMs M41L and T215Y, respectively, were found. This first study in the RoC, based on the WHO classification, shows that the threshold of transmitted drug resistance before large-scale access to antiretroviral therapy was 4%.
A Large Scale Dynamical System Immune Network Model with Finite Connectivity
NASA Astrophysics Data System (ADS)
Uezu, T.; Kadono, C.; Hatchett, J.; Coolen, A. C. C.
We study a model of an idiotypic immune network which was introduced by N. K. Jerne. It is known that in immune systems there generally exist several kinds of immune cells which can recognize any particular antigen. Taking this fact into account and assuming that each cell interacts with only a finite number of other cells, we analyze a large scale immune network via both numerical simulations and statistical mechanical methods, and show that the distribution of the concentrations of antibodies becomes non-trivial for a range of values of the strength of the interaction and the connectivity.
Investigation of the Large Scale Evolution and Topology of Coronal Mass Ejections in the Solar Wind
NASA Technical Reports Server (NTRS)
Riley, Peter
1999-01-01
This investigation is concerned with the large-scale evolution and topology of Coronal Mass Ejections (CMEs) in the solar wind. During this reporting period we have analyzed a series of low density intervals in the ACE (Advanced Composition Explorer) plasma data set that bear many similarities to CMEs. We have begun a series of 3D, MHD (Magnetohydrodynamics) coronal models to probe potential causes of these events. We also edited two manuscripts concerning the properties of CMEs in the solar wind. One was re-submitted to the Journal of Geophysical Research.
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
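The construction can be sketched numerically. In this minimal illustration, correlated lognormal draws stand in for density values at pairs of points separated by a fixed lag (the data and parameters are invented): the empirical copula is obtained by rank-transforming each marginal, and the Gaussian-copula hypothesis says the inverse-normal-mapped ranks are jointly Gaussian.

```python
import numpy as np
from scipy.stats import norm, pearsonr

rng = np.random.default_rng(0)

# Stand-in data: correlated lognormal pairs (not a real density field).
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=20000)
rho1, rho2 = np.exp(z[:, 0]), np.exp(z[:, 1])

def to_uniform(x):
    """Empirical CDF transform: map each sample to its rank in (0, 1)."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / len(x)

# The copula of (rho1, rho2) is the joint law of the rank-transformed pair;
# it is unchanged by the monotone exp() applied to each marginal.
u1, u2 = to_uniform(rho1), to_uniform(rho2)

# Gaussian-copula hypothesis: Phi^{-1}(u1), Phi^{-1}(u2) are jointly normal,
# so a single correlation parameter summarizes the dependence.
g1, g2 = norm.ppf(u1), norm.ppf(u2)
r, _ = pearsonr(g1, g2)
print(f"copula correlation parameter: {r:.2f}")
```

Here the recovered correlation parameter is close to the 0.6 used to generate the underlying Gaussian pair, as expected when the Gaussian-copula hypothesis holds by construction.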
Combined heat and power supply using Carnot engines
NASA Astrophysics Data System (ADS)
Horlock, J. H.
The Marshall Report on the thermodynamic and economic feasibility of introducing large-scale combined heat and electrical power generation (CHP) into the United Kingdom is summarized. Combinations of reversible power plants (Carnot engines) meeting a given demand for power and heat production are analyzed. The Marshall Report states that fairly large-scale CHP plants are an attractive energy-saving option for areas of high heat-load density. Analysis shows that, for given requirements, the total heat supply and the utilization factor are functions of heat output, reservoir supply temperature, the temperature of heat rejected to the reservoir, and an intermediate temperature for district heating.
Remote sensing of the biological dynamics of large-scale salt evaporation ponds
NASA Technical Reports Server (NTRS)
Richardson, Laurie L.; Bachoon, Dave; Ingram-Willey, Vebbra; Chow, Colin C.; Weinstock, Kenneth
1992-01-01
Optical properties of salt evaporation ponds associated with Exportadora de Sal, a salt production company in Baja California Sur, Mexico, were analyzed using a combination of spectroradiometer and extracted pigment data, and Landsat-5 Thematic Mapper imagery. The optical characteristics of each pond are determined by the biota, which consists of dense populations of algae and photosynthetic bacteria containing a wide variety of photosynthetic and photoprotective pigments. Analysis has shown that spectral and image data can differentiate between taxonomic groups of the microbiota, detect changes in population distributions, and reveal large-scale seasonal dynamics.
Statistical simulation of the magnetorotational dynamo.
Squire, J; Bhattacharjee, A
2015-02-27
Turbulence and dynamo induced by the magnetorotational instability (MRI) are analyzed using quasilinear statistical simulation methods. It is found that homogeneous turbulence is unstable to a large-scale dynamo instability, which saturates to an inhomogeneous equilibrium with a strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the dependence of the angular momentum transport on Pm in the quasilinear model is qualitatively similar to that of nonlinear MRI turbulence. This demonstrates the importance of the large-scale dynamo and suggests how dramatically simplified models may be used to gain insight into the astrophysically relevant regimes of very low or high Pm.
Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G
2016-05-25
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale-resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and the neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning, mounting of ultrathin sections on one-hole grids, and post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator operating in scanning transmission EM (STEM) mode. Large-scale EM images are typically ~5-50 gigapixels in size and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross-sections of whole animals, and tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes that can be quantified in various cell types, including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue: large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide, quantifiable manner.
ERIC Educational Resources Information Center
Marshall, Althea T.
2010-01-01
Purpose: The purpose of this investigation was to examine the impact of Chromosome 4p- syndrome on the communication and expressive language phenotype of a large cross-cultural population of children, adolescents, and adults. Method: A large-scale survey study was conducted and a descriptive research design was used to analyze quantitative and…
Simulation research on the process of large scale ship plane segmentation intelligent workshop
NASA Astrophysics Data System (ADS)
Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei
2017-04-01
The large-scale ship plane segmentation intelligent workshop is a new development, with no prior research in related fields in China or abroad. The mode of production must be transformed from the existing Industry 2.0, or partial Industry 3.0, pattern of "human analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great many questions remain to be settled in both management and technology, such as the evolution of the workshop structure, the development of intelligent equipment, and changes in the business model; with them comes the reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation of the plane segmentation intelligent workshop.
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
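The abstract does not give the exact statistic; a common way to score such an association (an assumption here, not necessarily the authors' formula) is the hypergeometric tail probability of the observed number of shared partners:

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(n_total, deg1, deg2, shared):
    """P(X >= shared) for the number of common neighbors of two proteins,
    under the null that their partner sets are drawn independently at
    random from n_total proteins (hypergeometric tail)."""
    # X ~ Hypergeom(M=n_total, n=deg1, N=deg2); sf(k-1) = P(X >= k)
    return hypergeom.sf(shared - 1, n_total, deg1, deg2)

# Two proteins with 20 partners each out of 4000: sharing 5 partners is far
# beyond the ~0.1 shared partners expected by chance.
p = shared_partner_pvalue(4000, 20, 20, 5)
print(f"p = {p:.2e}")
```

A small p-value flags the pair as functionally associated; ranking pairs by this score is one way such a network-based algorithm can tolerate random false positives, since spurious edges rarely produce many shared partners.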
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy raise the question of whether an analysis represents the bulk material, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images at sufficient resolution, subsequently stitched together into a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis toward becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales.
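The TU/e tools themselves are not described in detail here; the core step in any such stitching pipeline is estimating the translation between two overlapping tiles, which can be sketched with FFT phase correlation (the function and data names are hypothetical):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return (dy, dx) such that a ~= np.roll(b, (dy, dx), axis=(0, 1)),
    estimated from the peak of the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12      # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# Synthetic overlap test: a noise tile shifted by (5, -3) is recovered.
rng = np.random.default_rng(4)
tile = rng.standard_normal((128, 128))
shifted = np.roll(tile, (5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, tile))
```

In a real mosaic the estimated pairwise offsets are then globally reconciled before compositing, but the per-pair registration above is the representative computation.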
Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.
Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco
2018-06-07
Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation to enhance recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published on their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review critically analyzes ATPS scale-up strategies to enhance their potential for industrial adoption. In particular, large-scale operation considerations, different phase-separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity, and economic modeling to predict large-scale costs are discussed. ATPS intensification to increase the amount of sample processed in each system, the development of recycling strategies, and the creation of highly efficient predictive models remain areas of great significance that can be further exploited with high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review attempts to present the areas of opportunity for increasing the attractiveness of ATPS at industrial levels.
Demonstration of Regenerable, Large-scale Ion Exchange System Using WBA Resin in Rialto, CA
2008-03-01
… Saturation Index; MCL – Maximum Contaminant Level; NaOH – sodium hydroxide; NDBA – N-nitrosodi-n-butylamine; NDEA – N-nitrosodiethylamine; NDMA – … analyzed using IC/MS/MS. Nitrosamines were analyzed using EPA Method 521. NDMA was 2.6 ppt with a detection limit of 2 ppt. All other nitrosamines analyzed (including NDEA, NDBA, NDPA, NMEA) …
The distribution of free electrons in the inner galaxy from pulsar dispersion measures
NASA Technical Reports Server (NTRS)
Harding, D. S.; Harding, A. K.
1981-01-01
The dispersion measures of a sample of 149 pulsars in the inner Galaxy (|l| < 50 deg) were statistically analyzed to deduce the large-scale distribution of free thermal electrons in this region. The dispersion measure distribution of these pulsars shows significant evidence for a decrease in the electron scale height, from a local value greater than the pulsar scale height to a value less than the pulsar scale height, at galactocentric radii inside approximately 7 kpc. An increase in the electron density (to a value around 0.15 per cu cm at 4 to 5 kpc) must accompany such a decrease in scale height. There is also evidence for a large-scale warp in the electron distribution below the b = 0 deg plane inside the solar circle. A model is proposed for the electron distribution which incorporates these features, and Monte Carlo generated dispersion measure distributions are presented for the parameters which best reproduce the observed pulsar distributions.
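The quantity underlying this analysis is simple: the dispersion measure is the line-of-sight integral of electron density, so a mean density follows directly from DM and distance. A back-of-envelope sketch, with illustrative numbers chosen to reproduce the ~0.15 per cu cm quoted above:

```python
# Dispersion measure DM = integral of n_e along the line of sight; for a
# roughly uniform medium, <n_e> = DM / d.  Units: DM in pc cm^-3, d in pc.
def mean_electron_density(dm_pc_cm3, distance_pc):
    return dm_pc_cm3 / distance_pc

# A hypothetical pulsar at 5 kpc with DM = 750 pc cm^-3 implies
# <n_e> = 0.15 cm^-3, the density scale the abstract quotes at 4-5 kpc.
print(mean_electron_density(750.0, 5000.0))
```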
NASA Astrophysics Data System (ADS)
Venegas-González, Alejandro; Chagas, Matheus Peres; Anholetto Júnior, Claudio Roberto; Alvares, Clayton Alcarde; Roig, Fidel Alejandro; Tomazello Filho, Mario
2016-01-01
We explored the relationship between tree growth in two tropical species and local and large-scale climate variability in Southeastern Brazil. Tree-ring width chronologies of Tectona grandis (teak) and Pinus caribaea (Caribbean pine) were compared with local drought indices (Water Requirement Satisfaction Index, WRSI; Standardized Precipitation Index, SPI; and Palmer Drought Severity Index, PDSI) and with large-scale climate indices describing equatorial Pacific sea surface temperature (Trans-Niño Index, TNI; Niño-3.4, N3.4) and atmospheric circulation variations in the Southern Hemisphere (Antarctic Oscillation, AAO). Teak trees showed positive correlations with the three local indices in the current summer and fall. A significant correlation between the WRSI index and Caribbean pine was observed in the dry season preceding tree-ring formation. The influence of large-scale climate patterns was observed only for TNI and AAO: radial growth in teak was reduced in the months preceding the growing season under positive values of the TNI, and radial growth increased (decreased) during December (March) to February (May) of the previous (current) growing season under the positive phase of the AAO in teak (Caribbean pine) trees. This new dendroclimatological study in Southeastern Brazil sheds light on local and large-scale climate influences on tree growth in recent decades, contributing to future climate change studies.
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system developed for simulating flames inside the open-source 3D computer graphics software Blender, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature-distance computations. Results on a real oil and gas refinery are presented.
Physical and human dimensions of deforestation in Amazonia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skole, D.L.; Chomentowski, W.H.; Salas W.A.
1994-05-01
In the Brazilian Amazon, regional trends are influenced by large-scale external forces but mediated by local conditions. Tropical deforestation has a large influence on global hydrology, climate and biogeochemical cycles, but understanding is inadequate because of a lack of accurate measurements of rate, geographic extent and spatial patterns, and a lack of insight into its causes, including interrelated social, economic and environmental factors. This article proposes an interdisciplinary approach for analyzing tropical deforestation in the Brazilian Amazon. The first part shows how deforestation can be measured from satellite remote sensing and sociodemographic and economic data. The second part proposes an explanatory model considering the relationship among deforestation and large-scale social, economic, and institutional factors. 43 refs., 8 figs.
Scaling relations for large Martian valleys
NASA Astrophysics Data System (ADS)
Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.
2009-02-01
The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remain controversial. We analyzed scaling relations for 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each, because these relations are either theoretically predicted or robust terrestrial empiricisms for graded, precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy these empiricisms. On Mars, the width-area scaling exponent b ranges from -0.7 to 4.7, in contrast with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent θ ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; and the length-area, or Hack's, exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point-source discharges provide the terrestrial analogs best suited to describe the other large Martian valleys.
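The width-area exponent b referred to above is simply the slope of a log-log regression of channel width on drainage area. A minimal sketch with synthetic data (the dataset and coefficients are invented; a terrestrial perennial channel would give b near 0.3-0.6):

```python
import numpy as np

def scaling_exponent(area, width):
    """Least-squares slope of log(width) vs log(area): the exponent b in
    width ~ area**b."""
    b, _ = np.polyfit(np.log(area), np.log(width), 1)
    return b

# Synthetic terrestrial-like channel: width ~ area**0.5 with mild noise.
rng = np.random.default_rng(1)
area = np.logspace(6, 10, 50)                              # drainage area, m^2
width = 0.01 * area**0.5 * rng.lognormal(0.0, 0.05, size=50)  # width, m
print(f"fitted b = {scaling_exponent(area, width):.2f}")
```

The slope-area exponent θ and Hack's exponent n are fitted the same way, with slope or mainstem length in place of width; the Martian valleys fall outside the terrestrial ranges for all three fits.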
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
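A minimal sketch of the central object, the monotonic-segment decomposition, with a single smoothing step standing in for the successive averagings (this is an illustrative implementation, not the author's code):

```python
import numpy as np

def monotonic_segments(x):
    """Split a series into maximal monotonic runs; return (durations, amplitudes)."""
    d = np.diff(x)
    sign = np.sign(d)
    # segment boundaries occur where the sign of the increment changes
    turns = np.flatnonzero(sign[1:] * sign[:-1] < 0) + 1
    idx = np.concatenate(([0], turns, [len(x) - 1]))
    durations = np.diff(idx)                 # local time scales
    amplitudes = np.abs(np.diff(x[idx]))     # segment amplitudes
    return durations, amplitudes

def monotony_point(x):
    """One point of the spectrum: (mean local time scale, mean amplitude)."""
    dur, amp = monotonic_segments(x)
    return dur.mean(), amp.mean()

# Smoothing removes small-scale wiggles, moving the point toward larger
# time scales; successive averagings trace out the monotony spectrum.
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(10000))
scale0, _ = monotony_point(x)
smoothed = np.convolve(x, np.ones(20) / 20, mode="valid")
scale1, _ = monotony_point(smoothed)
print(scale0, scale1)
```

Maxima of the amplitude-versus-scale curve obtained this way mark the time scales that dominate the variability, without ever decomposing the series into components.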
Hydropower and sustainability: resilience and vulnerability in China's powersheds.
McNally, Amy; Magee, Darrin; Wolf, Aaron T
2009-07-01
Large dams represent a whole complex of social, economic and ecological processes, perhaps more than any other large infrastructure project. Today, countries with rapidly developing economies are constructing new dams to provide energy and flood control to growing populations in riparian and distant urban communities. If the system lacks the institutional capacity to absorb these physical and institutional changes, there is potential for conflict, threatening human security. In this paper, we propose analyzing sustainability (political, socioeconomic, and ecological) in terms of resilience versus vulnerability, framed within the spatial abstraction of a powershed. The powershed framework facilitates multi-scalar and transboundary analysis while remaining focused on the questions of resilience and vulnerability relating to hydropower dams. Focusing on examples from China, this paper describes the complex nature of dams using the sustainability and powershed frameworks. We then analyze the roles of institutions in China to understand the relationships between power, human security and the socio-ecological system. To inform the study of conflicts over dams, China is a particularly useful case study because we can examine what happens at the international, national and local scales. The powershed perspective allows us to examine resilience and vulnerability across political boundaries from a dynamic, process-defined analytical scale while remaining focused on a host of questions relating to hydro-development that invoke drivers and impacts on national and sub-national scales. The ability to disaggregate the effects of hydropower dam construction from political boundaries allows for a deeper analysis of resilience and vulnerability.
From our analysis we find that reforms in China's hydropower sector since 1996 have been motivated by the need to create stability at the national scale rather than resilient solutions to China's growing demand for energy and water resource control at the local and international scales. Some measures that improved economic development through the market economy and a combination of dam construction and institutional reform may indeed improve hydro-political resilience at a single scale. However, if China does not address large-scale hydropower construction's potential to create multi-scale geopolitical tensions, it may be vulnerable to conflict - though not necessarily violent conflict - in domestic and international political arenas. We conclude with a look toward a resilient basin institution for the Nu/Salween River, the site of a proposed large-scale hydropower development effort in China and Myanmar.
Analysis of labor productivity using large-scale data of firm's financial statements
NASA Astrophysics Data System (ADS)
Ikeda, Y.; Souma, W.; Aoyama, H.; Fujiwara, Y.; Iyetomi, H.
2010-08-01
We investigated the labor productivity distribution by analyzing large-scale financial statement data for listed and unlisted Japanese firms to clarify the characteristics of the Japanese labor market. Both the high- and low-productivity sides of the labor productivity distribution follow power-law distributions. Large inequality on the low-productivity side was observed only for the manufacturing sectors in Japanese fiscal year (JFY) 1999, and for both the manufacturing and non-manufacturing sectors in JFY 2002. The declines in Japanese GDP in JFY 1999 and JFY 2002 coincided with the large inequality on the low-productivity side of the distribution. A lower peak was found for all non-manufacturing sectors. This might be the origin of the low productivity of the non-manufacturing sectors reported in recent economic studies.
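Power-law tails of the kind described can be quantified with a standard tail-index estimate; a sketch using synthetic Pareto data (the Hill estimator is a generic tool and not necessarily the method the authors used, and the data here are simulated, not firm statements):

```python
import numpy as np

def hill_exponent(sample, k):
    """Hill estimator of the tail index alpha for P(X > x) ~ x**(-alpha),
    computed from the k largest observations."""
    tail = np.sort(sample)[-k:]                   # k largest, ascending
    return k / np.sum(np.log(tail / tail[0]))     # tail[0] is the threshold

# Classical Pareto draws with alpha = 2, a stand-in for a productivity tail.
rng = np.random.default_rng(3)
x = rng.pareto(2.0, size=50000) + 1.0
print(f"alpha ≈ {hill_exponent(x, 2000):.2f}")
```

A smaller fitted alpha means a heavier tail, i.e. more inequality; tracking the fitted exponent across years is one way to make the "large inequality" statements quantitative.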
J. Danilo Chinea; Eileen H. Helmer
2003-01-01
The extensive recovery from agricultural clearing of Puerto Rican forests over the past half-century provides a good opportunity to study tropical forest recovery on a landscape scale. Using ordination and regression techniques, we analyzed forest inventory data from across Puerto Rico's moist and wet secondary forests to evaluate their species composition and whether...
Liu, Ming-Qi; Zeng, Wen-Feng; Fang, Pan; Cao, Wei-Qian; Liu, Chao; Yan, Guo-Quan; Zhang, Yang; Peng, Chao; Wu, Jian-Qiang; Zhang, Xiao-Jin; Tu, Hui-Jun; Chi, Hao; Sun, Rui-Xiang; Cao, Yong; Dong, Meng-Qiu; Jiang, Bi-Yun; Huang, Jiang-Ming; Shen, Hua-Li; Wong, Catherine C L; He, Si-Min; Yang, Peng-Yuan
2017-09-05
The precise and large-scale identification of intact glycopeptides is a critical step in glycoproteomics. Owing to the complexity of glycosylation, the overall throughput, data quality and accessibility of intact glycopeptide identification currently lag behind those of routine proteomic analyses. Here, we propose a workflow for the precise, high-throughput identification of intact N-glycopeptides at the proteome scale using stepped-energy fragmentation and a dedicated search engine. pGlyco 2.0 conducts comprehensive quality control, including false discovery rate evaluation at all three levels of matches (glycans, peptides and glycopeptides), improving the current level of accuracy of intact glycopeptide identification. The N-glycoproteomes of samples metabolically labeled with 15N/13C were analyzed quantitatively and used to validate the glycopeptide identifications; this could serve as a benchmark pipeline for comparing different search engines. Finally, we report a large-scale glycoproteome dataset consisting of 10,009 distinct site-specific N-glycans on 1988 glycosylation sites from 955 glycoproteins in five mouse tissues. Protein glycosylation is a heterogeneous post-translational modification that generates great proteomic diversity and is difficult to analyze. Here the authors describe pGlyco 2.0, a workflow for the precise one-step identification of intact N-glycopeptides at the proteome scale.
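The target-decoy idea behind such false discovery rate control can be sketched as follows (pGlyco's actual scoring and multi-level combination are more involved; this shows only the generic single-level estimate, with toy scores):

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Generic target-decoy FDR estimate at a score threshold: the number
    of decoy hits approximates the number of false target hits, so
    FDR ~= #decoys / #targets above the threshold."""
    t = sum(s >= threshold for s in target_scores)
    d = sum(s >= threshold for s in decoy_scores)
    return d / t if t else 0.0

# pGlyco-style control applies this logic at three match levels (glycan,
# peptide, glycopeptide); here a single toy level with invented scores:
targets = [9.1, 8.7, 7.9, 7.2, 6.5, 5.8, 4.9, 4.1]
decoys = [5.5, 4.6, 3.8, 3.1, 2.4, 2.0, 1.7, 1.2]
print(target_decoy_fdr(targets, decoys, 5.0))
```

Raising the threshold until the estimate falls below the desired level (e.g. 1%) fixes the reported identification list.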
Analyzing CMOS/SOS fabrication for LSI arrays
NASA Technical Reports Server (NTRS)
Ipri, A. C.
1978-01-01
Report discusses set of design rules that have been developed as result of work with test arrays. Set of optimum dimensions is given that would maximize process output and would correspondingly minimize costs in fabrication of large-scale integration (LSI) arrays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choo, Jaegul; Kim, Hannah; Clarkson, Edward
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
Choo, Jaegul; Kim, Hannah; Clarkson, Edward; ...
2018-01-31
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi
2017-04-01
The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of the new French research infrastructure for observation of the critical zone, OZCAR. SNO KARST is composed of several karst sites distributed over conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountains, and the western and northwestern coasts near the Atlantic and the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving understanding of the hydrological functioning of karst catchments. Here we used several sites of SNO KARST to assess the hydrological response of karst catchments to long-term variation of large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and karst water resources variability. As karst hydrosystems are highly heterogeneous media, they behave differently across time scales: we explore the large-scale/local-scale relationships as a function of time scale using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea level pressure (SLP). The different wavelet components of karst discharge in response to the corresponding wavelet components of climate fields are either 1) compared to physico-chemical/geochemical responses at karst springs, or 2) interpreted in terms of hydrological functioning by comparing discharge wavelet components to internal components obtained from precipitation/discharge models using the KARSTMOD conceptual modeling platform of SNO KARST.
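The multiresolution idea can be sketched in its simplest (Haar-style) dyadic form; this is a stand-in for whatever wavelet family the study actually uses, and the signal here is synthetic:

```python
import numpy as np

def haar_components(x, levels):
    """Dyadic multiresolution: at each level, split the signal into a
    smooth part (pairwise means, i.e. a coarser time scale) and a detail
    part; return the details for each scale plus the final smooth."""
    comps, smooth = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        n = len(smooth) // 2 * 2
        pairs = smooth[:n].reshape(-1, 2)
        means = pairs.mean(axis=1)
        comps.append(pairs[:, 0] - means)   # detail: variability at this scale
        smooth = means
    comps.append(smooth)                    # large-scale residual
    return comps

# Each detail series can then be correlated with the matching-scale
# component of a climate field (e.g. SLP) to localize the link in scale.
parts = haar_components(np.sin(np.linspace(0, 8 * np.pi, 256)), 4)
print([len(p) for p in parts])
```

Scale-by-scale correlation of this kind is what lets a heterogeneous karst system show strong climate links at some time scales and none at others.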
Zaehringer, Julie G; Wambugu, Grace; Kiteme, Boniface; Eckert, Sandra
2018-05-01
Africa has been heavily targeted by large-scale agricultural investments (LAIs) throughout the last decade, with scarcely known impacts on local social-ecological systems. In Kenya, a large number of LAIs were made in the region northwest of Mount Kenya. These large-scale farms produce vegetables and flowers mainly for European markets. However, land use in the region remains dominated by small-scale crop and livestock farmers with less than 1 ha of land each, who produce both for their own subsistence and for local markets. We interviewed 100 small-scale farmers living near five different LAIs to elicit their perceptions of the impacts of these LAIs on their land use and the overall environment. Furthermore, we analyzed remotely sensed land cover and land use data to assess land use change in the vicinity of the five LAIs. While land use change did not follow a clear trend, a number of small-scale farmers did adapt their crop management to environmental changes, such as reduced river water flow and increased pests, which they attributed to the presence of LAIs. Despite the high number of open conflicts between small-scale land users and LAIs around the issue of river water abstraction, the main environmental impact, felt by almost half of the interviewed land users, was air pollution from agrochemicals sprayed on the LAIs' land. Even though only a low percentage of local land users and their household members were directly involved with LAIs, a large majority of respondents favored the presence of LAIs nearby, as they are believed to contribute to the region's overall economic development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Large Scale Integrated Circuits for Military Applications.
1977-05-01
The economic incentive for narrowing this gap is examined. Two categories of cost are analyzed: the direct life cycle cost of the integrated circuit... The dependence of these costs on the physical characteristics of the integrated circuits is discussed. The economic and physical characteristics of...
Analysis on economic carrying capacity index of pig breeding in China
NASA Astrophysics Data System (ADS)
Leng, Bi-Bin; Liu, Jia-Ling; Xu, Yue-Feng
2017-08-01
In this paper, factor analysis was employed to analyze and calculate Gross Domestic Product (GDP) per capita over the last decade, the proportion of research and experimental development (R&D) expenditure in GDP, and urban and rural residents' pork consumption, and to explore an economic carrying capacity index for large-scale pig breeding in China. The results showed that GDP growth led to better techniques and higher investment, and that stronger governmental support for science and technology provided good conditions for large-scale pig breeding. Besides, the substantial increase in pork consumption among rural and urban residents has contributed to pig breeding at large scale. As a result, the economic carrying capacity index of Chinese pig farming is on the rise.
Lai, Hsien-Tang; Kung, Pei-Tseng; Su, Hsun-Pi; Tsai, Wen-Chen
2014-09-01
Limited studies with large samples have been conducted on the utilization of dental calculus scaling among people with physical or mental disabilities. This study aimed to investigate the utilization of dental calculus scaling among the national disabled population, using nationwide data from 2006 to 2008. Descriptive analysis and logistic regression were performed to identify factors influencing dental calculus scaling utilization. The dental calculus scaling utilization rate among people with physical or mental disabilities was 16.39%, and the annual utilization frequency was 0.2 times. The utilization rate was higher among female and non-aboriginal samples. It decreased with increasing age and disability severity, while it increased with income, education level, urbanization of residential area, and number of chronic illnesses. Gender, age, ethnicity (aboriginal or non-aboriginal), education level, urbanization of residential area, income, catastrophic illnesses, chronic illnesses, disability type, and disability severity all significantly influenced the dental calculus scaling utilization rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mapping spatial patterns of denitrifiers at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modeling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
Cosmological measurements with general relativistic galaxy correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth
We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-space distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called 'relativistic effects', and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting these terms would bias cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale galaxy number count surveys.
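The mechanics of a Fisher forecast of the kind described can be sketched generically (a toy two-parameter power-law spectrum with Gaussian errors, not the CLASS-based setup of the paper; all names and numbers are illustrative):

```python
import numpy as np

def fisher_matrix(model, theta, ells, sigma, eps=1e-6):
    """Gaussian Fisher matrix F_ij = sum_ell dC/dtheta_i * dC/dtheta_j / sigma_ell^2,
    with parameter derivatives taken by central finite differences."""
    p = len(theta)
    derivs = []
    for i in range(p):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        derivs.append((model(tp, ells) - model(tm, ells)) / (2 * eps))
    F = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            F[i, j] = np.sum(derivs[i] * derivs[j] / sigma**2)
    return F

# toy angular power spectrum C_ell = A * (ell/100)^n with 5% errors per multipole
model = lambda th, ells: th[0] * (ells / 100.0) ** th[1]
ells = np.arange(10, 300, dtype=float)
sigma = 0.05 * model([1.0, -1.2], ells)
F = fisher_matrix(model, [1.0, -1.2], ells, sigma)
cov = np.linalg.inv(F)
print(np.sqrt(np.diag(cov)))  # forecast 1-sigma errors on (A, n)
```

Dropping a physical term from `model` while keeping the data fixed is how a bias from neglected effects (such as lensing convergence) would be estimated in this framework.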
Characterization of spray-induced turbulence using fluorescence PIV
NASA Astrophysics Data System (ADS)
van der Voort, Dennis D.; Dam, Nico J.; Clercx, Herman J. H.; Water, Willem van de
2018-07-01
The strong shear induced by the injection of liquid sprays at high velocities induces turbulence in the surrounding medium. This, in turn, influences the motion of droplets as well as the mixing of air and vapor. Using fluorescence-based tracer particle image velocimetry, the velocity field surrounding 125-135 m/s sprays exiting a 200-μm nozzle is analyzed. For the first time, the small- and large-scale turbulence characteristics of the gas phase surrounding a spray have been measured simultaneously, using a large eddy model to determine the sub-grid scales. This further allows the calculation of the Stokes numbers of droplets, which indicate the influence of turbulence on their motion. The measurements lead to an estimate of the dissipation rate ε ≈ 35 m^2 s^{-3}, a microscale Reynolds number Re_λ ≈ 170, and a Kolmogorov length scale of η ≈ 10^{-4} m. Using these dissipation rates to convert a droplet size distribution to a distribution of Stokes numbers, we show that only the large-scale motion of turbulence disperses the droplets in the current case, but the small scales will grow in importance with increasing levels of atomization and ambient pressures.
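The quoted Kolmogorov scale follows directly from the standard relations η = (ν³/ε)^(1/4) and τ_η = (ν/ε)^(1/2); a quick numerical check, assuming air viscosity and an illustrative droplet size (neither taken from the paper):

```python
# Assumed air properties; dissipation rate as quoted in the abstract above
nu = 1.5e-5        # kinematic viscosity of air, m^2/s (assumed)
eps = 35.0         # turbulent dissipation rate, m^2/s^3

eta = (nu**3 / eps) ** 0.25      # Kolmogorov length scale, m (~1e-4 m)
tau_eta = (nu / eps) ** 0.5      # Kolmogorov time scale, s

# droplet response time under Stokes drag, water droplet in air
rho_p, mu = 1000.0, 1.8e-5       # droplet density kg/m^3, dynamic viscosity Pa*s
d = 20e-6                        # droplet diameter, m (illustrative choice)
tau_p = rho_p * d**2 / (18 * mu)
St = tau_p / tau_eta             # Stokes number relative to the smallest eddies

print(f"eta = {eta:.1e} m, St = {St:.2f}")
```

With these assumed values η comes out near 10^{-4} m, consistent with the abstract; St of order one means such a droplet filters out the smallest eddies while still responding to the large-scale motion.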
Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R
2012-01-01
In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in small-world characteristics of the brain networks in the ALL survivors; an observation that confirms our hypothesis of widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
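The basic graph metrics such a toolbox computes can be illustrated from first principles (a numpy-only sketch of degree and clustering coefficient on a thresholded correlation network; toy data and names, not GAT code):

```python
import numpy as np

def graph_metrics(A):
    """Degree and clustering coefficient of an undirected, unweighted graph
    given its adjacency matrix (zero diagonal, symmetric)."""
    k = A.sum(axis=1)
    # diag(A^3) counts closed 3-walks; each triangle through a node gives two
    triangles = np.diag(A @ A @ A) / 2.0
    possible = k * (k - 1) / 2.0
    C = np.divide(triangles, possible, out=np.zeros_like(triangles),
                  where=possible > 0)
    return k, C

# toy "correlation network": threshold a correlation matrix to binarize it
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # 200 observations x 10 regions
R = np.corrcoef(X.T)
A = (np.abs(R) > 0.1).astype(float)
np.fill_diagonal(A, 0)
k, C = graph_metrics(A)
print(k.mean(), C.mean())
```

Small-world comparisons of the kind reported would then contrast the mean clustering and path length of the observed network against degree-matched random graphs.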
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute to communication ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
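The vertex-centric model that sub-graph centric frameworks generalize can be illustrated with a minimal connected-components sketch (plain Python; our own simplification of a Pregel-style superstep loop, not GoFFish code):

```python
def connected_components(edges, vertices):
    """Pregel-style connected components: each vertex repeatedly adopts the
    smallest label among itself and its neighbors until no label changes.
    One sweep of the while-loop plays the role of a superstep."""
    label = {v: v for v in vertices}
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            best = min([label[v]] + [label[u] for u in neighbors[v]])
            if best < label[v]:
                label[v] = best
                changed = True
    return label

# two components: {0, 1, 2} and {3, 4}
print(connected_components([(0, 1), (1, 2), (3, 4)], range(5)))
```

In a sub-graph centric model, each partition would instead run this to convergence locally within a superstep and only exchange boundary labels, which is the source of the faster convergence the abstract reports.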
Epigenetic supersimilarity of monozygotic twin pairs
USDA-ARS?s Scientific Manuscript database
Monozygotic twins have long been studied to estimate heritability and explore epigenetic influences on phenotypic variation. The phenotypic and epigenetic similarities of monozygotic twins have been assumed to be largely due to their genetic identity. Here, by analyzing data from a genome-scale stud...
Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model
NASA Technical Reports Server (NTRS)
Baker, P. L.; Burton, W. B.
1975-01-01
High-resolution 21-cm hydrogen line observations at low galactic latitude are analyzed to determine the large-scale distribution of galactic hydrogen. Distribution parameters are found by model fitting, optical depth effects are computed using a two-component gas model suggested by the observations, and calculations are made for a one-component uniform spin-temperature gas model to show the systematic departures between this model and data obtained by incorrect treatment of the optical depth effects. Synthetic 21-cm line profiles are computed from the two-component model, and the large-scale trends of the observed emission profiles are reproduced together with the magnitude of the small-scale emission irregularities. Values are determined for the thickness of the galactic hydrogen disk between half density points, the total observed neutral hydrogen mass of the galaxy, and the central number density of the intercloud hydrogen atoms. It is shown that typical hydrogen clouds must be between 1 and 13 pc in diameter and that optical thinness exists on large scales despite the presence of optically thin gas.
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Wu, Yubao; Zhu, Xiaofeng; Chen, Jian; Zhang, Xiang
2013-11-01
Epistasis (gene-gene interaction) detection in large-scale genetic association studies has recently drawn extensive research interests as many complex traits are likely caused by the joint effect of multiple genetic factors. The large number of possible interactions poses both statistical and computational challenges. A variety of approaches have been developed to address the analytical challenges in epistatic interaction detection. These methods usually output the identified genetic interactions and store them in flat file formats. It is highly desirable to develop an effective visualization tool to further investigate the detected interactions and unravel hidden interaction patterns. We have developed EINVis, a novel visualization tool that is specifically designed to analyze and explore genetic interactions. EINVis displays interactions among genetic markers as a network. It utilizes a circular layout (specially, a tree ring view) to simultaneously visualize the hierarchical interactions between single nucleotide polymorphisms (SNPs), genes, and chromosomes, and the network structure formed by these interactions. Using EINVis, the user can distinguish marginal effects from interactions, track interactions involving more than two markers, visualize interactions at different levels, and detect proxy SNPs based on linkage disequilibrium. EINVis is an effective and user-friendly free visualization tool for analyzing and exploring genetic interactions. It is publicly available with detailed documentation and online tutorial on the web at http://filer.case.edu/yxw407/einvis/. © 2013 WILEY PERIODICALS, INC.
Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...
2006-10-05
This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top-of-atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data, as illustrated by the sensitivity tests. Lastly, the possibility of using the European Centre for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.
A survey on routing protocols for large-scale wireless sensor networks.
Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong
2011-01-01
With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation of a significant number of nodes, have become a hot topic. "Large-scale" refers mainly to a large coverage area or a high node density. Accordingly, routing protocols must scale well as the network scope extends and node density increases. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods for solving the energy problem in large-scale WSNs are currently the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages.
Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn.
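The energy argument for hierarchical routing can be made concrete with a first-order radio energy model of the kind commonly used in such analyses (all constants, the node layout, and the fixed cluster-head choice are illustrative assumptions, not taken from the survey):

```python
import numpy as np

def tx_energy(bits, d, e_elec=50e-9, e_amp=100e-12):
    """First-order radio model: electronics cost plus d^2 amplifier cost."""
    return e_elec * bits + e_amp * bits * d**2

def rx_energy(bits, e_elec=50e-9):
    return e_elec * bits

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, size=(100, 2))   # 100 nodes in a 100 m square
sink = np.array([50.0, 150.0])               # distant sink
bits = 2000                                  # packet size

# flat routing: every node transmits its reading directly to the sink
d_sink = np.linalg.norm(nodes - sink, axis=1)
e_flat = tx_energy(bits, d_sink).sum()

# hierarchical routing: 5 cluster heads aggregate, then forward one packet each
heads = nodes[:5]                            # arbitrary fixed heads for the sketch
d_head = np.linalg.norm(nodes[:, None] - heads[None], axis=2).min(axis=1)
e_hier = (tx_energy(bits, d_head).sum()                              # members -> nearest head
          + rx_energy(bits) * len(nodes)                             # heads receive
          + tx_energy(bits, np.linalg.norm(heads - sink, axis=1)).sum())  # heads -> sink
print(e_hier < e_flat)
```

Because the amplifier cost grows with distance squared, replacing many long transmissions with short intra-cluster hops plus a few long aggregated ones cuts total energy, which is the core reason hierarchical protocols dominate in large-scale WSNs.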
A spatial picture of the synthetic large-scale motion from dynamic roughness
NASA Astrophysics Data System (ADS)
Huynh, David; McKeon, Beverley
2017-11-01
Jacobi and McKeon (2011) set up a dynamic roughness apparatus to excite a synthetic, travelling wave-like disturbance in a wind tunnel boundary layer study. In the present work, this dynamic roughness has been adapted for a flat-plate turbulent boundary layer experiment in a water tunnel. A key advantage of operating in water as opposed to air is the longer flow timescales. This makes accessible higher non-dimensional actuation frequencies and correspondingly shorter synthetic length scales, and is thus more amenable to particle image velocimetry. As a result, this experiment provides a novel spatial picture of the synthetic mode, the coupled small scales, and their streamwise development. It is demonstrated that varying the roughness actuation frequency allows for significant tuning of the streamwise wavelength of the synthetic mode, with a range of 3δ-13δ being achieved. Employing a phase-locked decomposition, spatial snapshots of the synthetic large scale are constructed and used to analyze its streamwise behavior. Direct spatial filtering is used to separate the synthetic large scale and the related small scales, and the results are compared to those obtained by temporal filtering that invokes Taylor's hypothesis. The support of AFOSR (Grant # FA9550-16-1-0361) is gratefully acknowledged.
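A phase-locked decomposition of this kind can be sketched as a triple decomposition, u = mean + coherent(phase) + residual, by averaging samples that share the same phase of the forcing (a minimal numpy sketch on a synthetic signal; the data and names are our own, not the experiment's):

```python
import numpy as np

def phase_locked_average(u, period):
    """Triple decomposition of a periodically forced signal: global mean,
    phase-locked coherent part, and incoherent residual."""
    u = np.asarray(u, dtype=float)
    n = (len(u) // period) * period
    cycles = u[:n].reshape(-1, period)       # one row per forcing cycle
    mean = cycles.mean()
    coherent = cycles.mean(axis=0) - mean    # organized, phase-locked motion
    residual = cycles - mean - coherent      # incoherent turbulence
    return mean, coherent, residual

rng = np.random.default_rng(1)
period = 64                                  # samples per actuation cycle
t = np.arange(period * 50)
u = 2.0 + 0.5 * np.sin(2 * np.pi * t / period) + 0.2 * rng.standard_normal(t.size)
mean, coh, res = phase_locked_average(u, period)
print(mean, coh.max())   # recovers roughly 2.0 and 0.5
```

Averaging over cycles suppresses the incoherent part by roughly the square root of the number of cycles, which is why the 0.5-amplitude coherent wave emerges cleanly from the noise.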
Xiangyang Zhou; Shankar Mahalingam; David Weise
2007-01-01
This paper presents a combined study of laboratory scale fire spread experiments and a three-dimensional large eddy simulation (LES) to analyze the effect of terrain slope on marginal burning behavior in live chaparral shrub fuel beds. Line fire was initiated in single species fuel beds of four common chaparral plants under various fuel bed configurations and ambient...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhongming; Liu, Heping; Katul, Gabriel G.
It is now accepted that large-scale turbulent eddies impact the widely reported non-closure of the surface energy balance when latent and sensible heat fluxes are measured using the eddy covariance method in the atmospheric surface layer (ASL). However, a mechanistic link between large eddies and non-closure of the surface energy balance remains a subject of inquiry. Here, measured 10 Hz time series of vertical velocity, air temperature, and water vapor density collected in the ASL are analyzed for conditions where entrainment and/or horizontal advection separately predominate. The series are decomposed into small and large eddies based on a frequency cutoff, and their contributions to turbulent fluxes are analyzed. Phase difference between vertical velocity and water vapor density associated with large eddies reduces latent heat fluxes, especially in conditions where advection prevails. Furthermore, enlarged phase difference of large eddies linked to entrainment or advection occurrence leads to increased residuals of the surface energy balance.
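The frequency-cutoff decomposition of a flux, and the way a phase lag at the large scale erodes the covariance, can be sketched on synthetic series (a minimal numpy illustration with our own names and numbers, not the study's data):

```python
import numpy as np

def flux_by_scale(w, q, dt, f_cut):
    """Split the covariance <w'q'> into contributions of eddies below and
    above a cutoff frequency, using a sharp spectral (Fourier) filter."""
    w = w - w.mean()
    q = q - q.mean()
    f = np.fft.rfftfreq(len(w), d=dt)
    low = f < f_cut
    def band(x, keep):
        X = np.fft.rfft(x)
        X[~keep] = 0.0
        return np.fft.irfft(X, n=len(x))
    flux_large = np.mean(band(w, low) * band(q, low))
    flux_small = np.mean(band(w, ~low) * band(q, ~low))
    return flux_large, flux_small

# synthetic 10 Hz-like series: a large eddy with a w-q phase lag (the lag cuts
# its flux contribution by cos(lag)) plus in-phase small eddies
n, dt = 2**14, 0.1
i = np.arange(n)
lag = np.pi / 3
w = np.sin(2 * np.pi * 10 * i / n) + 0.5 * np.sin(2 * np.pi * 3000 * i / n)
q = np.sin(2 * np.pi * 10 * i / n - lag) + 0.5 * np.sin(2 * np.pi * 3000 * i / n)
fl, fs = flux_by_scale(w, q, dt, f_cut=0.1)
print(fl, fs)   # fl = 0.5*cos(lag) = 0.25, fs = 0.125
```

The large-scale flux is reduced from 0.5 to 0.25 purely by the phase lag, mirroring the mechanism the study proposes for increased energy balance residuals.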
Large-scale shell-model calculation with core excitations for neutron-rich nuclei beyond 132Sn
NASA Astrophysics Data System (ADS)
Jin, Hua; Hasegawa, Munetake; Tazaki, Shigeru; Kaneko, Kazunari; Sun, Yang
2011-10-01
The structure of neutron-rich nuclei with a few nucleons beyond 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including neutron core excitations, a new effective interaction is determined by employing the extended pairing-plus-quadrupole model with monopole corrections. The model provides a systematic description of the energy levels of A=133-135 nuclei up to high spins and reproduces the available electromagnetic transition data. The structure of these nuclei is analyzed in detail, with emphasis on effects associated with core excitations. The results show evidence of hexadecupole correlation in addition to octupole correlation in this mass region. The suggested feature of magnetic rotation in 135Te occurs in the present shell-model calculation.
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang; ...
2017-04-20
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed in this paper. Mixing diagram and cloud microphysical relationship analyses show a homogeneous mixing trait of a positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcels) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air, and a relatively large turbulent dissipation rate, are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates a high likelihood of homogeneous mixing. Finally, a clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, making it a completely self-adaptive procedure. Finally, realistic simulations of Kolmogorov-like flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
NASA Astrophysics Data System (ADS)
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang; Lu, Chunsong
2017-09-01
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed in this paper. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Finally, clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
Energy Spectral Behaviors of Communication Networks of Open-Source Communities
Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun
2015-01-01
Large-scale online collaborative production in open-source communities is necessarily accompanied by large-scale communication. The production activities of open-source communities, and especially their communication activities, have attracted increasing attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of the community's communication structure over 12 periods based on real data; it then discusses the basic concepts of quantum mapping of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks in the way quantum mechanics studies the structures of large molecules; finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, and concludes that the communication structures of the community exhibit multiple self-similarity and criticality. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions for applying them, in revealing the characteristics of the structures. The proposed quantum mapping method can also be applied to the structural studies of other large-scale organizations. PMID:26047331
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which aims to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
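The final step described above, deriving a small effective Hamiltonian in a reduced space that reproduces selected properties of the full one, can be illustrated with a toy spectral projection in Python. This is not the authors' unitary many-body transformation; it is a minimal sketch of the underlying principle, and all names in it are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                  # stand-in "large-scale" Hamiltonian (symmetric)

vals, vecs = np.linalg.eigh(H)     # full diagonalization of the large space
V = vecs[:, :3]                    # keep only the 3 lowest eigenvectors
H_eff = V.T @ H @ V                # effective Hamiltonian in the reduced space

# By construction, H_eff reproduces the 3 lowest eigenvalues of H exactly.
low = np.linalg.eigvalsh(H_eff)
```

In a realistic shell-model calculation the transformation is constructed without diagonalizing the full Hamiltonian, which is precisely what makes such truncation schemes valuable.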
A note on the comparative turbidity of some estuaries of the Americas
Uncles, R.J.; Smith, R.E.
2005-01-01
Field data from 27 estuaries of the Americas are used to show that, in broad terms, there is a large difference in turbidity between the analyzed east and west-coast estuaries and that tidal range and tidal length have an important influence on that turbidity. Generic, numerical sediment-transport modeling is used to illustrate this influence, which exists over a range of space scales from, e.g., the Rogue River Estuary (few km, few mg l-1) to the Bay of Fundy (hundreds of km, few g l-1). The difference in Pacific and Atlantic seaboard estuarine turbidity for the analyzed estuaries is ultimately related to the broad-scale geomorphology of the two continents.
A priori analysis of differential diffusion for model development for scale-resolving simulations
NASA Astrophysics Data System (ADS)
Hunger, Franziska; Dietzsch, Felix; Gauding, Michael; Hasse, Christian
2018-01-01
The present study analyzes differential diffusion and the mechanisms responsible for it with regard to the turbulent/nonturbulent interface (TNTI) with special focus on model development for scale-resolving simulations. In order to analyze differences between resolved and subfilter phenomena, direct numerical simulation (DNS) data are compared with explicitly filtered data. The DNS database stems from a temporally evolving turbulent plane jet transporting two passive scalars with Schmidt numbers of unity and 0.25 presented by Hunger et al. [F. Hunger et al., J. Fluid Mech. 802, R5 (2016), 10.1017/jfm.2016.471]. The objective of this research is twofold: (i) to compare the position of the turbulent-nonturbulent interface between the original DNS data and the filtered data and (ii) to analyze differential diffusion and the impact of the TNTI with regard to scale resolution in the filtered DNS data. For the latter, differential diffusion quantities are studied, clearly showing the decrease of differential diffusion at the resolved scales with increasing filter width. A transport equation for the scalar differences is evaluated. Finally, the existence of large scalar gradients, gradient alignment, and the diffusive fluxes being the physical mechanisms responsible for the separation of the two scalars are compared between the resolved and subfilter scales.
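The effect the abstract describes, resolved-scale scalar differences shrinking as the filter width grows, can be mimicked with a toy 1-D box (top-hat) filter. This is a hedged illustration only; the actual study filters 3-D DNS fields, and every name below is hypothetical:

```python
import numpy as np

def box_filter(field, width):
    """Top-hat filter of a periodic 1-D field via a moving average."""
    kernel = np.ones(width) / width
    padded = np.concatenate([field[-width:], field, field[:width]])
    out = np.convolve(padded, kernel, mode="same")
    return out[width:width + len(field)]

x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
# Two scalars sharing the large scales but differing at small scales,
# a crude stand-in for scalars with different Schmidt numbers.
z1 = np.sin(x) + 0.5 * np.sin(40.0 * x)
z2 = np.sin(x) + 0.5 * np.sin(40.0 * x + 2.0)
diff = z1 - z2                      # difference lives at high wavenumber
resolved = box_filter(diff, 32)     # resolved-scale difference after filtering
```

The variance of the filtered difference field drops sharply relative to the unfiltered one, and drops further as the filter width increases, mirroring the decrease of differential diffusion at the resolved scales reported in the study.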
The massive fermion phase for the U(N) Chern-Simons gauge theory in D=3 at large N
Bardeen, William A.
2014-10-07
We explore the phase structure of fermions in the U(N) Chern-Simons Gauge theory in three dimensions using the large N limit where N is the number of colors and the fermions are taken to be in the fundamental representation of the U(N) gauge group. In the large N limit, the theory retains its classical conformal behavior and considerable attention has been paid to possible AdS/CFT dualities of the theory in the conformal phase. In this paper we present a solution for the massive phase of the fermion theory that is exact to the leading order of ‘t Hooft’s large N expansion. We present evidence for the spontaneous breaking of the exact scale symmetry and analyze the properties of the dilaton that appears as the Goldstone boson of scale symmetry breaking.
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ??? M 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r-??) can fit. Instead, we use an attenuation function that combines a fixed power law (r-0.7) with a fitted exponential dependence on distance, which is estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and 0.58M and 0.31M for large earthquakes.
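The attenuation form quoted above, a fixed power law multiplied by a fitted exponential, is straightforward to evaluate. A small Python sketch using the PGV coefficients given in the abstract (the function name and the example distances are ours, not the paper's):

```python
import math

def attenuation_factor(r_km, coeff=0.0063, power=0.7):
    """Distance attenuation combining a fixed power law (r^-0.7) with a
    fitted exponential decay, as described in the abstract.
    coeff=0.0063 is the value quoted for PGV from moderate earthquakes;
    0.0073 is quoted for PGA."""
    return r_km ** (-power) * math.exp(-coeff * r_km)

# Relative PGV attenuation between 10 km and 100 km hypocentral distance:
ratio = attenuation_factor(10.0) / attenuation_factor(100.0)
```

Because of the exponential term, the predicted decay at large distances is noticeably faster than the power law alone would give, which is the point the regression is designed to capture.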
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
Magnetospheric turbulence and substorm expansion phase onset
NASA Astrophysics Data System (ADS)
Antonova, Elizaveta; Stepanova, Marina; Kirpichev, Igor; Pulinets, Maria; Znatkova, Svetlana; Ovchinnikov, Ilya; Kornilov, Ilya; Kornilova, Tatyana
The magnetosphere of the Earth is formed by the turbulent flow of the solar wind around an obstacle, the magnetic field of the Earth. The level of turbulence in the magnetosheath and geomagnetic tail is very high even during periods of comparatively stable solar wind parameters. This situation requires checking the most popular concepts of the nature of magnetospheric activity. Properties of magnetosheath and magnetospheric turbulence are analyzed in connection with the problem of the nature of substorms and the localization of substorm onset. The large-scale picture of the plasma velocity fluctuations obtained using data of INTERBALL and Geotail observations is analyzed. It is shown that it is possible to identify a plasma ring surrounding the Earth at geocentric distances from 7 Re to 10 Re with a comparatively low level of fluctuations. Results of observations demonstrating isolated substorm onset inside this ring are summarized. It is shown that a non-contradictory picture of large-scale magnetospheric convection and substorm dynamics can be obtained by taking into account the high level of magnetosheath and magnetospheric turbulence.
Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun; Wang, Bei; Bremer, Peer-Timo; Papka, Michael E; Curtiss, Larry A; Pascucci, Valerio
2016-01-01
Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun
2016-01-01
Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.
Gyulassy, Attila; Knoll, Aaron; Lau, Kah Chun; ...
2016-01-31
Large-scale molecular dynamics (MD) simulations are commonly used for simulating the synthesis and ion diffusion of battery materials. A good battery anode material is determined by its capacity to store ion or other diffusers. However, modeling of ion diffusion dynamics and transport properties at large length and long time scales would be impossible with current MD codes. To analyze the fundamental properties of these materials, therefore, we turn to geometric and topological analysis of their structure. In this paper, we apply a novel technique inspired by discrete Morse theory to the Delaunay triangulation of the simulated geometry of a thermally annealed carbon nanosphere. We utilize our computed structures to drive further geometric analysis to extract the interstitial diffusion structure as a single mesh. Lastly, our results provide a new approach to analyze the geometry of the simulated carbon nanosphere, and new insights into the role of carbon defect size and distribution in determining the charge capacity and charge dynamics of these carbon based battery materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.
As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining
Exact-Differential Large-Scale Traffic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios
2015-01-01
Analyzing large-scale traffic by simulation requires executing the simulation many times with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which makes it possible to simulate only the changed scenarios in later executions while keeping exactly the same results as a whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments with a Tokyo traffic simulation, the exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
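The core redundancy-reduction idea, recompute only the parts touched by a scenario change while guaranteeing results identical to a full rerun, can be caricatured with a toy memoized pipeline. This is an illustrative Python sketch of the principle only, not the paper's parallel discrete-event algorithm, and all names are hypothetical:

```python
def simulate_road(road, params):
    # Stand-in for an expensive per-road computation.
    return params[road] * 2

def run(params, cache, changed=None):
    """Recompute only the changed roads, reusing cached results for the
    rest; the returned result is identical to a full simulation."""
    changed = set(params) if changed is None else changed
    for road in changed:
        cache[road] = simulate_road(road, params)
    return dict(cache)

params = {"A": 1, "B": 2, "C": 3}
cache = {}
full = run(params, cache)                  # initial full run
params["B"] = 5                            # minor scenario change
diff = run(params, cache, changed={"B"})   # differential rerun, B only
```

In the real system the difficulty lies in tracking how a local change propagates through time-stepped interactions between roads, which is what the exact-differential algorithm addresses.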
GAIA: A WINDOW TO LARGE-SCALE MOTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nusser, Adi; Branchini, Enzo; Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu
2012-08-10
Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate
Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.
2015-01-01
Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
HAPEX-Sahel: A large-scale study of land-atmosphere interactions in the semi-arid tropics
NASA Technical Reports Server (NTRS)
Gutorbe, J-P.; Lebel, T.; Tinga, A.; Bessemoulin, P.; Brouwer, J.; Dolman, A.J.; Engman, E. T.; Gash, J. H. C.; Hoepffner, M.; Kabat, P.
1994-01-01
The Hydrologic Atmospheric Pilot EXperiment in the Sahel (HAPEX-Sahel) was carried out in Niger, West Africa, during 1991-1992, with an intensive observation period (IOP) in August-October 1992. It aims at improving the parameterization of land surface atmospheric interactions at the Global Circulation Model (GCM) gridbox scale. The experiment combines remote sensing and ground-based measurements with hydrological and meteorological modeling to develop aggregation techniques for use in large scale estimates of the hydrological and meteorological behavior of large areas in the Sahel. The experimental strategy consisted of a period of intensive measurements during the transition period of the rainy to the dry season, backed up by a series of long-term measurements in a 1 by 1 deg square in Niger. Three 'supersites' were instrumented with a variety of hydrological and (micro) meteorological equipment to provide detailed information on the surface energy exchange at the local scale. Boundary layer measurements and aircraft measurements were used to provide information at scales of 100-500 sq km. All relevant remote sensing images were obtained for this period. This program of measurements is now being analyzed and an extensive modelling program is under way to aggregate the information at all scales up to the GCM grid box scale. The experimental strategy and some preliminary results of the IOP are described.
NASA Astrophysics Data System (ADS)
Schneider, Christian
2017-04-01
The study analyzes the impact of different farming systems on soil quality and soil degradation in European loess landscapes. The analyses are based on geo-chemical soil properties, landscape metrics and geomorphological indicators. The German Middle Saxonian Loess Region represents loess landscapes whose ecological functions were shaped by land consolidation measures resulting in large-scale high-input farming systems. The Polish Proszowice Plateau is still characterized by a traditional small-scale peasant agriculture. The research areas were analyzed on different scale levels combining GIS, field, and laboratory methods. A digital terrain classification was used to identify representative catchment basins for detailed pedological studies, which were focused on soil properties that respond to soil management within several years, like pH-value, total carbon (TC), total nitrogen (TN), inorganic carbon (IC), soil organic carbon (TOC=TC-IC), hot-water extractable carbon (HWC), hot-water extractable nitrogen (HWN), total phosphorus, plant-available phosphorus (P), plant-available potassium (K) and the potential cation exchange capacity (CEC). The study has shown that significant differences in major soil properties can be observed because of different fertilizer inputs and partly because of different cultivation techniques. The traditional system also increases soil heterogeneity. Contrary to expectations, the study has shown that the small-scale peasant farming system resulted in mean soil organic carbon and phosphorus contents similar to those of the industrialized high-input farming system. A further study could include investigations of the effects of soil amendments like herbicides and pesticides on soil degradation.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FACs characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
Volis, Sergei; Ormanbekova, Danara; Yermekbayev, Kanat; Song, Minshu; Shulgina, Irina
2015-01-01
Detecting local adaptation and its spatial scale is one of the most important questions of evolutionary biology. However, recognition of the effect of local selection can be challenging when there is considerable environmental variation across distances spanning the whole species range. We analyzed patterns of local adaptation in emmer wheat, Triticum dicoccoides, at two spatial scales, small (inter-population distance less than one km) and large (inter-population distance more than 50 km), using several approaches. Plants originating from four distinct habitats at two geographic scales (cold edge, arid edge and two topographically dissimilar core locations) were reciprocally transplanted and their success over time was measured as 1) lifetime fitness in the year of planting, and 2) population growth four years after planting. In addition, we analyzed molecular (SSR) and quantitative trait variation and calculated the QST/FST ratio. No home advantage was detected at the small spatial scale. At the large spatial scale, home advantage was detected for the core population and the cold edge population in the year of introduction by measuring lifetime plant performance. However, superior performance of the arid edge population in its own environment was evident only after several generations, by measuring experimental population growth rate, with SSR genotyping used to count the number of plants and seeds per introduced genotype per site. These results highlight the importance of multi-generation surveys of population growth rate in local adaptation testing. Despite predominant self-fertilization of T. dicoccoides and the associated high degree of structuring of genetic variation, the results of the QST-FST comparison were in general agreement with the pattern of local adaptation at the two spatial scales detected by reciprocal transplanting.
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.
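As a concrete illustration of the kind of estimator the study advocates, here is a minimal Huber-weighted iteratively reweighted least squares fit in NumPy. This is a generic sketch, not the paper's pipeline (which uses dedicated neuroimaging tooling and permutation-free analytic tests), and every name below is ours:

```python
import numpy as np

def huber_irls(x, y, delta=1.345, n_iter=50):
    """Robust linear fit (intercept + slope) via iteratively reweighted
    least squares with Huber weights; outliers get down-weighted
    instead of dominating the fit as in ordinary least squares."""
    X = np.column_stack([np.ones(len(y)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        s = s if s > 0 else 1.0
        u = np.abs(r / s)
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)
y[:10] += 20.0                       # inject gross outliers (artifacts)
intercept, slope = huber_irls(x, y)  # recovers roughly (1.0, 2.0)
```

With 5% of observations shifted by +20, an ordinary least squares fit would drag the intercept well away from its true value, while the Huber weights leave the robust estimate close to the uncontaminated parameters.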
Design considerations for implementation of large scale automatic meter reading systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mak, S.; Radford, D.
1995-01-01
This paper discusses the requirements imposed on the design of an AMR system expected to serve a large (> 1 million) customer base spread over a large geographical area. Issues such as system throughput, response time, and multi-application expandability are addressed, all of which are intimately dependent on the underlying communication system infrastructure, the local geography, the customer base, and the regulatory environment. A methodology for analysis, assessment, and design of large systems is presented. For illustration, two communication systems -- a low power RF/PLC system and a power frequency carrier system -- are analyzed and discussed.
USDA-ARS?s Scientific Manuscript database
Tomato Functional Genomics Database (TFGD; http://ted.bti.cornell.edu) provides a comprehensive systems biology resource to store, mine, analyze, visualize and integrate large-scale tomato functional genomics datasets. The database is expanded from the previously described Tomato Expression Database...
Power Grid Data Analysis with R and Hadoop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin
This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.
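The divide-and-recombine pattern that RHIPE exposes can be illustrated with a minimal in-memory MapReduce; this is a Python stand-in for the idea, not the actual R/RHIPE or Hadoop API:

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Minimal divide-and-recombine in the spirit of a MapReduce
    interface: map each record to (key, value) pairs, group by key,
    then recombine each group with a reduce function."""
    groups = defaultdict(list)
    for rec in records:  # "map" phase: emit key/value pairs per record
        for key, value in map_fn(rec):
            groups[key].append(value)
    # "reduce" phase: one summary per key
    return {key: reduce_fn(key, values) for key, values in groups.items()}
```

For example, per-sensor means over a stream of `(sensor_id, reading)` tuples fall out of `map_reduce(readings, lambda r: [r], lambda k, vs: sum(vs) / len(vs))`; at the 2 TB scale described above, the same logical pattern runs distributed across a Hadoop cluster.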
An Overview of Science Education in the Caribbean: Research, Policy and Practice.
ERIC Educational Resources Information Center
Sweeney, Aldrin E.
2003-01-01
Analyzes science education in the Caribbean and provides examples of science education policy and practice. Emphasizes large-scale national efforts in Barbados, Bermuda, and Jamaica. Discusses and provides recommendations for future directions in science education in these countries. (Contains 88 references.) (Author/NB)
Part I: The Evidence Cycles of Extinction.
ERIC Educational Resources Information Center
Brownlee, Shannon
1984-01-01
Discusses a theory suggesting that large-scale extinctions of marine animal families occur in cycles of 26 million years. Research methodology consisted of analyzing and charting fossil records showing the decline and disappearance of these animals over the past 250 million years. Other theories are considered. (BC)
A roadmap for natural product discovery based on large-scale genomics and metabolomics
USDA-ARS?s Scientific Manuscript database
Actinobacteria encode a wealth of natural product biosynthetic gene clusters, whose systematic study is complicated by numerous repetitive motifs. By combining several metrics we developed a method for global classification of these gene clusters into families (GCFs) and analyzed the biosynthetic ca...
NASA Astrophysics Data System (ADS)
Lares, M.; Luparello, H. E.; Garcia Lambas, D.; Ruiz, A. N.; Ceccarelli, L.; Paz, D.
2017-10-01
Cosmic voids are of great interest given their relation to the large-scale distribution of mass and the way they trace cosmic flows shaping the cosmic web. Here we show that the distribution of voids has, in consonance with the distribution of mass, a characteristic scale at which void pairs are preferentially located. We identify clumps of voids with similar environments and use them to define second-order underdensities. We also characterize their properties and analyze their impact on the cosmic microwave background.
2008-08-01
Analyses included ion chromatography-tandem mass spectrometry (IC-MS/MS) for anions and for metals (Ca, Cu, Fe, Mg, Mn, K, Na, and Zn). Nitrosamines, including N-nitrosodi-n-butylamine (NDBA), N-nitrosodiethylamine (NDEA), N-nitrosodimethylamine (NDMA), and N-nitrosodi-n-propylamine (NDPA), were analyzed using EPA Method 521; NDMA was measured at 2.6 parts per trillion (ppt).
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology is used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
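The role of the resolution parameter in the rigidity density can be sketched as a sum of Gaussian kernels whose width sets the scale of the "topological lens"; the exact kernel form used in the paper is not reproduced here, so this is only an illustration:

```python
import numpy as np

def rigidity_density(points, x, eta):
    """Kernel density built from Gaussians centered on the data points.
    The resolution parameter eta plays the role of the scale knob:
    large eta merges nearby features, small eta resolves them.
    (Illustrative kernel; not the paper's exact definition.)"""
    points = np.asarray(points, dtype=float)
    x = np.asarray(x, dtype=float)
    d2 = (x[..., None] - points) ** 2  # squared distances to every point
    return np.exp(-d2 / eta**2).sum(axis=-1)
```

At a coarse resolution the density between two nearby points stays high (they appear as one blob, one topological feature), while at a fine resolution it drops to near zero (two separate components), which is exactly what the filtration then picks up.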
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei
2017-10-01
In high precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized by LightTools Software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.
A new method of presentation the large-scale magnetic field structure on the Sun and solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field which occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve the time variations of the spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be useful when direct magnetograph observations are lacking, for example because of bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with spacecraft in situ observations and geomagnetic activity. We tried to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
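The harmonic-expansion step can be sketched as a least-squares fit of low-order surface harmonics to line-of-sight field samples; real analyses use many more terms, and the sampling and coefficients below are purely illustrative:

```python
import numpy as np

def fit_dipole(theta, phi, b_los):
    """Least-squares fit of the lowest-order (l = 1) real surface
    harmonics to line-of-sight field samples at colatitudes theta and
    longitudes phi. A minimal sketch of the harmonic-expansion idea;
    actual analyses extend to much higher degree."""
    A = np.column_stack([
        np.cos(theta),                  # g10 (axial dipole) basis term
        np.sin(theta) * np.cos(phi),    # g11 basis term
        np.sin(theta) * np.sin(phi),    # h11 basis term
    ])
    coeffs, *_ = np.linalg.lstsq(A, b_los, rcond=None)
    return coeffs  # (g10, g11, h11)
```

Given enough well-distributed samples, the fit recovers the generating coefficients exactly, and tracking the fitted coefficients day by day is what resolves the field's evolution within a rotation.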
Large Scale eHealth Deployment in Europe: Insights from Concurrent Use of Standards.
Eichelberg, Marco; Chronaki, Catherine
2016-01-01
Large-scale eHealth deployment projects face a major challenge when called to select the right set of standards and tools to achieve sustainable interoperability in an ecosystem that includes both legacy systems and new systems reflecting technological trends and progress. No single standard covers all the needs of an eHealth project, and there is a multitude of overlapping and perhaps competing standards that can be employed to define document formats, terminology, and communication protocols, mirroring alternative technical approaches and schools of thought. eHealth projects need to respond to the important question of how alternative or inconsistently implemented standards and specifications can be used to ensure practical interoperability and long-term sustainability in large-scale eHealth deployment. In the eStandards project, 19 European case studies reporting from R&D and large-scale eHealth deployment and policy projects were analyzed. Although this study is not exhaustive, by reflecting on the concepts, standards, and tools in concurrent use and on the successes, failures, and lessons learned, this paper offers practical insights into how eHealth deployment projects can make the most of the available eHealth standards and tools, and how standards and profile developing organizations can serve their users while embracing sustainability and technical innovation.
NASA Technical Reports Server (NTRS)
Riley, Peter
2000-01-01
This investigation is concerned with the large-scale evolution and topology of coronal mass ejections (CMEs) in the solar wind. During this reporting period we have focused on several aspects of CME properties, their identification and their evolution in the solar wind. The work included both analysis of Ulysses and ACE observations as well as fluid and magnetohydrodynamic simulations. In addition, we analyzed a series of "density holes" observed in the solar wind, that bear many similarities with CMEs. Finally, this work was communicated to the scientific community at three meetings and has led to three scientific papers that are in various stages of review.
Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.
Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra
2016-12-01
This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.
Research on unit commitment with large-scale wind power connected power system
NASA Astrophysics Data System (ADS)
Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing
2017-01-01
Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics of the models are summarized after classifying them according to their objective functions and constraints. Finally, the issues still to be solved and possible directions for future research and development are discussed, with the aim of meeting the requirements of the electricity market, energy-saving generation dispatch and the smart grid, and of providing a reference for researchers and practitioners in this field.
Algorithm of OMA for large-scale orthology inference
Roth, Alexander CJ; Gonnet, Gaston H; Dessimoz, Christophe
2008-01-01
Background OMA is a project that aims to identify orthologs within publicly available, complete genomes. With 657 genomes analyzed to date, OMA is one of the largest projects of its kind. Results The algorithm of OMA improves upon the standard bidirectional best-hit approach in several respects: it uses evolutionary distances instead of scores, considers distance inference uncertainty, includes many-to-many orthologous relations, and accounts for differential gene losses. Herein, we describe the algorithm for inference of orthology in detail and provide the rationale for parameter selection through multiple tests. Conclusion OMA contains several novel improvements for orthology inference and provides a unique dataset of large-scale orthology assignments. PMID:19055798
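The step beyond plain bidirectional best hits, pairing genes by mutual closest evolutionary distance while tolerating distance-inference uncertainty, can be sketched as follows; the real OMA algorithm uses confidence intervals and further verification steps, so the single `tol` parameter here is a simplification:

```python
def stable_pairs(dist, tol=0.0):
    """Mutual closest-distance pairing between genome A (rows) and
    genome B (columns). dist[i][j] is the evolutionary distance between
    gene i of A and gene j of B; `tol` is an illustrative stand-in for
    handling distance-inference uncertainty."""
    n_a, n_b = len(dist), len(dist[0])
    pairs = []
    for i in range(n_a):
        j = min(range(n_b), key=lambda c: dist[i][c])     # best hit of i in B
        best_back = min(dist[k][j] for k in range(n_a))   # best hit of j in A
        if dist[i][j] <= best_back + tol:                 # mutual, within tolerance
            pairs.append((i, j))
    return pairs
```

With `tol > 0`, near-ties are kept as candidate orthologs instead of being discarded by a strict reciprocal-best criterion, which is one way many-to-many relations can survive the first filtering step.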
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
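The local-averaging decimation described above can be sketched directly with array reshaping; the block sizes and averaging scheme are illustrative:

```python
import numpy as np

def decimate(video, ds=2, dt=2):
    """Spatiotemporal decimation of a (time, height, width) calcium
    video by simple local averaging: mean over non-overlapping
    ds x ds spatial blocks and dt-frame temporal windows."""
    T, H, W = video.shape
    T2, H2, W2 = T // dt, H // ds, W // ds
    v = video[:T2 * dt, :H2 * ds, :W2 * ds]       # trim to whole blocks
    v = v.reshape(T2, dt, H2, ds, W2, ds)
    return v.mean(axis=(1, 3, 5))                  # average within each block
```

Each factor-of-2 decimation in time and space shrinks the data roughly 8-fold, which is the source of the order-of-magnitude speedups in the demixing step described above.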
Jafari, G Reza; Sahimi, Muhammad; Rasaei, M Reza; Tabar, M Reza Rahimi
2011-02-01
Several methods have been developed in the past for analyzing the porosity and other types of well logs for large-scale porous media, such as oil reservoirs, as well as their permeability distributions. We developed a method for analyzing the porosity logs ϕ(h) (where h is the depth) and similar data that are often nonstationary stochastic series. In this method one first generates a new stationary series based on the original data, and then analyzes the resulting series. It is shown that the series based on the successive increments of the log y(h)=ϕ(h+δh)-ϕ(h) is a stationary and Markov process, characterized by a Markov length scale h_M. The coefficients of the Kramers-Moyal expansion for the conditional probability density function (PDF) P(y,h|y_0,h_0) are then computed. The resulting PDFs satisfy a Fokker-Planck (FP) equation, which is equivalent to a Langevin equation for y(h) that provides probabilistic predictions for the porosity logs. We also show that the Hurst exponent H of the self-affine distributions, which have been used in the past to describe the porosity logs, is directly linked to the drift and diffusion coefficients that we compute for the FP equation. Also computed are the level-crossing probabilities that provide insight into identifying the high or low values of the porosity beyond the depth interval in which the data have been measured. ©2011 American Physical Society
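The first step of the Kramers-Moyal analysis, estimating the conditional drift and diffusion coefficients of the increment series from binned moments, can be sketched as follows; bin counts and estimator details are illustrative, not the paper's exact procedure:

```python
import numpy as np

def km_coefficients(y, dh, bins=20):
    """Estimate the first two Kramers-Moyal coefficients of a series
    y(h) sampled at spacing dh: the drift D1(y) and diffusion D2(y),
    from the conditional moments of the increments, binned by the
    current value of y."""
    y0, dy = y[:-1], np.diff(y)
    edges = np.linspace(y0.min(), y0.max(), bins + 1)
    idx = np.clip(np.digitize(y0, edges) - 1, 0, bins - 1)
    centers, D1, D2 = [], [], []
    for b in range(bins):
        m = idx == b
        if m.sum() < 10:           # skip sparsely populated bins
            continue
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        D1.append(dy[m].mean() / dh)            # conditional first moment
        D2.append((dy[m] ** 2).mean() / (2 * dh))  # conditional second moment
    return np.array(centers), np.array(D1), np.array(D2)
```

Applied to a simulated Ornstein-Uhlenbeck series (drift -y, unit diffusion), the estimated D1 decreases linearly with y and D2 hovers near 1/2, recovering the generating Langevin dynamics.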
Jones, Casey A; Daehler, Curtis C
2018-01-01
Studies in plant phenology have provided some of the best evidence for large-scale responses to recent climate change. Over the last decade, more than thirty studies have used herbarium specimens to analyze changes in flowering phenology over time, although studies from tropical environments are thus far generally lacking. In this review, we summarize the approaches and applications used to date. Reproductive plant phenology has primarily been analyzed using two summary statistics, the mean flowering day of year and first-flowering day of year, but mean flowering day has proven to be a more robust statistic. Two types of regression models have been applied to test for associations between flowering, temperature and time: flowering day regressed on year and flowering day regressed on temperature. Most studies analyzed the effect of temperature by averaging temperatures from three months prior to the date of flowering. On average, published studies have used 55 herbarium specimens per species to characterize changes in phenology over time, but in many cases fewer specimens were used. Geospatial grid data are increasingly being used for determining average temperatures at herbarium specimen collection locations, allowing testing for finer scale correspondence between phenology and climate. Multiple studies have shown that inferences from herbarium specimen data are comparable to findings from systematically collected field observations. Understanding phenological responses to climate change is a crucial step towards recognizing implications for higher trophic levels and large-scale ecosystem processes. As herbaria are increasingly being digitized worldwide, more data are becoming available for future studies. As temperatures continue to rise globally, herbarium specimens are expected to become an increasingly important resource for analyzing plant responses to climate change.
A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks
Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong
2011-01-01
With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation of a large number of nodes have become a hot topic. “Large-scale” mainly means a large network area or a high node density. Accordingly, the routing protocols must scale well as the network scope extends and the node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods for solving the energy problem in large-scale WSNs are currently the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The high-level nodes are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages.
Moreover, a comparison of the routing protocols is conducted to demonstrate their differences in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn. PMID:22163808
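As a concrete example of the hierarchical idea, LEACH-style randomized cluster-head election rotates the energy-hungry aggregation role across nodes; this sketch omits LEACH's bookkeeping that bars recently elected heads from re-election, so it is only an illustration of the threshold mechanism:

```python
import random

def elect_cluster_heads(node_ids, p=0.05, rnd=0, seed=None):
    """LEACH-style randomized cluster-head election: each node
    independently becomes a head with a round-dependent threshold
    probability, so on average a fraction p of nodes serve as heads
    and the role rotates over rounds. (Simplified: the real protocol
    also excludes nodes that were heads in the last 1/p rounds.)"""
    r = random.Random(seed)
    threshold = p / (1 - p * (rnd % round(1 / p)))
    return [n for n in node_ids if r.random() < threshold]
```

Because only the elected heads aggregate and forward data in a given round, the remaining nodes can transmit over short distances and sleep otherwise, which is where the energy savings of hierarchical protocols come from.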
De-Identification in Learning Analytics
ERIC Educational Resources Information Center
Khalila, Mohammad; Ebner, Martin
2016-01-01
Learning analytics has reserved its position as an important field in the educational sector. However, the large-scale collection, processing, and analyzing of data has steered the wheel beyond the borders to face an abundance of ethical breaches and constraints. Revealing learners' personal information and attitudes, as well as their activities,…
Educating Preservice Teachers: The State of Affairs.
ERIC Educational Resources Information Center
Young, Edyth E.; Grant, Peggy A.; Montbriand, Cathy; Therriault, David J.
This paper examines reading issues and strategies for the 21st century, analyzing interventions that could become models for ensuring quality and alignment in preservice teacher education and reviewing the "Gap Analysis of Preservice and Inservice Teacher Training of Reading Instruction: Large-Scale Survey Study." It also synthesizes findings from…
Evaluating Large-Scale Studies to Accurately Appraise Children's Performance
ERIC Educational Resources Information Center
Ernest, James M.
2012-01-01
Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international…
USDA-ARS?s Scientific Manuscript database
While hydrotreated renewable jet fuel (HRJ) has been demonstrated for use in commercial and military aviation, a challenge to large-scale adoption is availability of cost competitive feedstocks. Brassica oilseed crops like Brassica napus, B. rapa, B. juncea, B. carinata, Sinapis alba, and Camelina s...
DOT National Transportation Integrated Search
1999-12-01
This paper analyzes the freight demand characteristics that drive modal choice by means of a large scale, national, disaggregate revealed preference database for shippers in France in 1988, using a nested logit. Particular attention is given to priva...
Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt
2017-01-01
Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693
Truancy Offenders in the Juvenile Justice System: A Multicohort Study
ERIC Educational Resources Information Center
Zhang, Dalun; Willson, Victor; Katsiyannis, Antonis; Barrett, David; Ju, Song; Wu, Jiun-Yu
2010-01-01
Truancy remains a persistent concern, with serious consequences for the individual, family, and society, as truancy is often linked to academic failure, disengagement with school, school dropout, and delinquency. This study analyzed large-scale data covering multiple years of cohorts of delinquent youths born between 1981 and 1988. Truancy…
To the Cloud! A Grassroots Proposal to Accelerate Brain Science Discovery
Vogelstein, Joshua T.; Mensh, Brett; Hausser, Michael; Spruston, Nelson; Evans, Alan; Kording, Konrad; Amunts, Katrin; Ebell, Christoph; Muller, Jeff; Telefont, Martin; Hill, Sean; Koushika, Sandhya P.; Cali, Corrado; Valdés-Sosa, Pedro Antonio; Littlewood, Peter; Koch, Christof; Saalfeld, Stephan; Kepecs, Adam; Peng, Hanchuan; Halchenko, Yaroslav O.; Kiar, Gregory; Poo, Mu-Ming; Poline, Jean-Baptiste; Milham, Michael P.; Schaffer, Alyssa Picchini; Gidron, Rafi; Okano, Hideyuki; Calhoun, Vince D; Chun, Miyoung; Kleissas, Dean M.; Vogelstein, R. Jacob; Perlman, Eric; Burns, Randal; Huganir, Richard; Miller, Michael I.
2018-01-01
The revolution in neuroscientific data acquisition is creating an analysis challenge. We propose leveraging cloud-computing technologies to enable large-scale neurodata storing, exploring, analyzing, and modeling. This utility will empower scientists globally to generate and test theories of brain function and dysfunction. PMID:27810005
Market scenarios and alternative administrative frameworks for US educational satellite systems
NASA Technical Reports Server (NTRS)
Walkmeyer, J. E., Jr.; Morgan, R. P.; Singh, J. P.
1975-01-01
Costs and benefits of developing an operational educational satellite system in the U.S. are analyzed. Scenarios are developed for each educational submarket and satellite channel and ground terminal requirements for a large-scale educational telecommunications system are estimated. Alternative organizational frameworks for such a system are described.
USDA-ARS?s Scientific Manuscript database
Micropropagation of Psidium guajava L. (guava) is a viable alternative to currently adopted techniques for large-scale plant propagation of commercial cultivars. Assessment of clonal fidelity in micropropagated plants is the first step towards ensuring genetic uniformity in mass production of planti...
Motivation to Read among Rural Adolescents
ERIC Educational Resources Information Center
Belken, Gloria
2013-01-01
This study used quantitative methods to investigate motivation to read among high school students in a tenth-grade English course at a rural high school in the Midwestern USA. Data were collected and analyzed to replicate previous studies. In this study, when compared to large-scale surveys, respondents showed more positive attitudes toward…
Mapping the distribution of the denitrifier community at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 km × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
NASA Technical Reports Server (NTRS)
Gross, S. H.
1981-01-01
The ASTP Doppler data were recalibrated, analyzed, related to geophysical phenomena and found to be consistent. Spectra were computed for data intervals covering each hemisphere. As many as 14 such intervals were analyzed. Wave structure is seen in much of the data. The spectra for all those intervals are very similar in a number of respects. They all decrease with frequency, or with decreasing wavelength. Power-law fits are reasonable and spectral indices are found to range from about -2.0 to about -3.5. Both large-scale (thousands of kilometers) and medium-scale (hundreds of kilometers) waves are evident. These spectra are very similar to spectra of in situ measurements of neutrals and ionization measured by Atmosphere Explorer C.
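The power-law fits mentioned above amount to linear regression in log-log space; a minimal sketch:

```python
import numpy as np

def spectral_index(freqs, power):
    """Fit a power law P(f) ~ f**alpha by linear regression in
    log-log space and return the spectral index alpha. The abstract
    reports indices of roughly -2.0 to -3.5 from such fits."""
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope
```

For a pure power-law spectrum the fitted slope recovers the exponent exactly; on real spectra the residual scatter around the line is what decides whether the power-law description is "reasonable".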
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-09-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies.
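The deletion/duplication calls made after signal normalization follow the usual MLPA ratio logic, which can be sketched as below; the 0.7/1.3 thresholds are a common convention in MLPA analysis, not values taken from this study:

```python
def mlpa_ratios(sample, reference, loss=0.7, gain=1.3):
    """Per-probe MLPA copy-number calling sketch: normalize each probe
    signal by the within-run total, take the sample/reference ratio,
    and flag probes below `loss` as deletions and above `gain` as
    duplications. Thresholds are a common convention, used here as an
    illustrative assumption."""
    s_tot = sum(sample.values())
    r_tot = sum(reference.values())
    calls = {}
    for probe in sample:
        ratio = (sample[probe] / s_tot) / (reference[probe] / r_tot)
        status = ("deletion" if ratio < loss
                  else "duplication" if ratio > gain
                  else "normal")
        calls[probe] = (round(ratio, 3), status)
    return calls
```

A heterozygous deletion ideally halves a probe's relative signal (ratio near 0.5), while a duplication raises it toward 1.5, which is why thresholds straddling those values are typical.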
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. 
PMID:29095927
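The fuzzy-set encoding of prior knowledge described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual operators: the trapezoidal membership function and the "expected nucleus diameter" prior are assumptions made up for the example.

```python
def trapezoid_membership(x, a, b, c, d):
    """Degree of membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical prior: nucleus diameters of 8-12 voxels are fully plausible.
def detection_confidence(diameter_vox):
    return trapezoid_membership(diameter_vox, 4.0, 8.0, 12.0, 16.0)

# Attach an uncertainty estimate to each (illustrative) seed-point detection,
# so downstream pipeline stages can weight or reject ambiguous detections.
detections = [{"id": 1, "diameter": 10.0}, {"id": 2, "diameter": 5.0}]
for det in detections:
    det["confidence"] = detection_confidence(det["diameter"])
```

Propagating such membership degrees through the pipeline is what lets simple, fast operators remain usable at terabyte scale: ambiguous outputs carry low confidence rather than silently degrading result quality.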
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Effects of Large-Scale Solar Installations on Dust Mobilization and Air Quality
NASA Astrophysics Data System (ADS)
Pratt, J. T.; Singh, D.; Diffenbaugh, N. S.
2012-12-01
Large-scale solar projects are increasingly being developed worldwide, and many of these installations are located in arid, desert regions. To examine the effects of these projects on regional dust mobilization and air quality, we analyze aerosol product data from NASA's Multi-angle Imaging Spectroradiometer (MISR) at annual and seasonal time intervals near fifteen photovoltaic and solar thermal stations ranging from 5-200 MW (12-4,942 acres) in size. The stations are distributed over eight different countries and were chosen based on size, location and installation date; most of the installations are large-scale, took place in desert climates and were installed between 2006 and 2010. We also consider air quality measurements of particulate matter between 2.5 and 10 micrometers (PM10) from Environmental Protection Agency (EPA) monitoring sites near and downwind from the project installations in the U.S. We use monthly wind data from NOAA's National Centers for Environmental Prediction (NCEP) Global Reanalysis to select the stations downwind from the installations, and then perform statistical analysis on the data to identify any significant changes in these quantities. We find that fourteen of the fifteen regions have lower aerosol product after the start of the installations, and all six PM10 monitoring stations show lower particulate matter measurements after construction commenced. However, results fail to show any statistically significant differences in aerosol optical index or PM10 measurements before and after the large-scale solar installations. Many of the large installations are very recent, and there is insufficient data to fully understand the long-term effects on air quality. More data and higher-resolution analysis are necessary to better understand the relationship between large-scale solar, dust and air quality.
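The before/after comparison this abstract describes can be sketched with Welch's unequal-variance t statistic. This is only an outline of the technique; the monthly aerosol optical depth values below are fabricated purely for illustration.

```python
import statistics

def welch_t(before, after):
    """Welch's t statistic for a two-sample comparison with unequal variances."""
    m1, m2 = statistics.mean(before), statistics.mean(after)
    v1, v2 = statistics.variance(before), statistics.variance(after)
    n1, n2 = len(before), len(after)
    return (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5

# Illustrative (made-up) monthly aerosol values near one hypothetical station,
# before and after construction; a real analysis would use MISR retrievals.
before = [0.21, 0.24, 0.19, 0.22, 0.25, 0.20]
after  = [0.18, 0.20, 0.17, 0.19, 0.21, 0.18]
t = welch_t(before, after)   # positive t: mean aerosol lower after construction
```

The t statistic would then be compared against the appropriate t distribution (with Welch-Satterthwaite degrees of freedom) to decide significance, which is the step at which the abstract's differences failed to reach statistical significance.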
NASA Astrophysics Data System (ADS)
Good, Garrett; Gerashchenko, Sergiy; Warhaft, Zellman
2010-11-01
Water droplets of sub-Kolmogorov size are sprayed into the turbulence side of a shearless turbulent-non-turbulent interface (TNI) as well as a turbulent-turbulent interface (TTI). An active grid is used to form the mixing layer, and a splitter plate separates the droplet/non-droplet interface near the origin. Particle concentration, size and velocity are determined by a phase Doppler particle analyzer, the velocity field by hot wires, and the droplet accelerations by particle tracking. As for a passive scalar, for the TTI, the concentration profiles are described by an error function. For the TNI, the concentration profiles fall off more rapidly than for the TTI due to the large-scale intermittency. The profile evolution and effects of initial conditions are discussed, as are the relative importance of the large and small scales in the transport process. It is shown that the concentration statistics are better described in terms of the Stokes number based on the large scales than the small, but some features of the mixing are determined by the small scales, and these will be discussed. Sponsored by the U.S. NSF.
Quantum error correction in crossbar architectures
NASA Astrophysics Data System (ADS)
Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie
2018-07-01
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
Montresor, Antonio; Cong, Dai Tran; Sinuon, Mouth; Tsuyuoka, Reiko; Chanthavisouk, Chitsavang; Strandgaard, Hanne; Velayudhan, Raman; Capuano, Corinne M.; Le Anh, Tuan; Tee Dató, Ah S.
2008-01-01
In 2001, Urbani and Palmer published a review of the epidemiological situation of helminthiases in the countries of the Western Pacific Region of the World Health Organization indicating the control needs in the region. Six years after this inspiring article, large-scale preventive chemotherapy for the control of helminthiasis has scaled up dramatically in the region. This paper analyzes the most recent published and unpublished country information on large-scale preventive chemotherapy and summarizes the progress made since 2000. Almost 39 million treatments were provided in 2006 in the region for the control of helminthiasis: nearly 14 million for the control of lymphatic filariasis, more than 22 million for the control of soil-transmitted helminthiasis, and over 2 million for the control of schistosomiasis. In general, control of these helminthiases is progressing well in the Mekong countries and Pacific Islands. In China, despite harboring the majority of the helminth infections of the region, the control activities have not reached the level of coverage of countries with much more limited financial resources. The control of food-borne trematodes is still limited, but pilot activities have been initiated in China, Lao People's Democratic Republic, and Vietnam. PMID:18846234
Experimental study of detonation of large-scale powder-droplet-vapor mixtures
NASA Astrophysics Data System (ADS)
Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.
2018-05-01
Large-scale experiments were carried out to investigate the detonation performance of a 1600-m3 ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernably increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.
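The scaled distance quoted above (6 m/kg^{1/3}) is the standard Hopkinson-Cranz blast-scaling variable. A minimal sketch, with an illustrative charge mass (the abstract reports cloud volume, not an equivalent mass):

```python
def scaled_distance(r_m, w_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3).

    R is the standoff distance in meters, W the (TNT-equivalent) charge
    mass in kilograms; blast-wave parameters collapse when plotted vs Z.
    """
    return r_m / w_kg ** (1.0 / 3.0)

# Hypothetical example: a pressure gauge 60 m from a 1000-kg equivalent charge.
z = scaled_distance(60.0, 1000.0)  # -> 6.0 m/kg^(1/3)
```

Comparing blast records at equal Z is what allows the stationary-charge and descending-charge detonations to be compared on a common footing.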
Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses
NASA Astrophysics Data System (ADS)
Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.
2014-12-01
Extreme conditions and events have long been a concern in weather forecasting and national security. While some evidence indicates that extreme weather will increase under global change scenarios, extremes are often related to the large-scale atmospheric circulation yet occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with greater than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather patterns to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we will compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
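The composite studies mentioned above amount to averaging a reanalysis field over event dates and subtracting the climatology. A minimal sketch of that bookkeeping, assuming a simple date-keyed data structure rather than any actual MERRA file format:

```python
import statistics

def composite_anomaly(field_by_date, event_dates):
    """Composite anomaly: mean field over event dates minus the climatology.

    field_by_date maps a date label to {grid_point: value}; event_dates lists
    the dates on which the extreme event occurred. Both are illustrative
    stand-ins for real reanalysis fields.
    """
    dates = list(field_by_date)
    points = field_by_date[dates[0]].keys()
    clim = {p: statistics.mean(field_by_date[d][p] for d in dates) for p in points}
    comp = {p: statistics.mean(field_by_date[d][p] for d in event_dates) for p in points}
    return {p: comp[p] - clim[p] for p in points}

# Toy field at two grid points over three months; one event month.
field = {"m1": {"A": 1.0, "B": 2.0},
         "m2": {"A": 2.0, "B": 2.0},
         "m3": {"A": 3.0, "B": 2.0}}
anom = composite_anomaly(field, ["m3"])  # A: 3 - 2 = +1.0; B: no anomaly
```

Repeating this over many events is what isolates the common large-scale pattern from case-to-case weather noise.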
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, and explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, among them a new model of eddy-viscosity type based on the vortex stretching magnitude, which is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Bukkuru, S.
2016-10-01
Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. Neutrons pass through materials, creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades, creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of molecular dynamics (MD) simulations are used to provide inputs to kinetic Monte Carlo (KMC) simulations, which can handle larger sizes, more defects and longer time durations. The use of unsupervised learning and graph optimization in handling and analyzing large-scale MD data will be highlighted.
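The KMC step that consumes the MD-derived rates can be sketched with the standard residence-time (Gillespie/BKL) algorithm. The two-event rate list below is a toy stand-in for real defect-hop rates:

```python
import math
import random

def kmc_step(rates, t, rng):
    """One residence-time KMC step: choose event i with probability
    rate_i / sum(rates), then advance time by an exponential increment
    dt = -ln(u) / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, t + dt

# Toy system: two defect-hop events with rates 1.0 and 3.0 (arbitrary units);
# event 1 should be selected roughly three times as often as event 0.
rng = random.Random(0)
t, chosen = 0.0, []
for _ in range(1000):
    i, t = kmc_step([1.0, 3.0], t, rng)
    chosen.append(i)
```

Because each step advances time by an amount set by the total rate, KMC reaches the second-to-hour timescales of defect evolution that MD (limited to nanoseconds) cannot.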
NASA Astrophysics Data System (ADS)
Cuzzi, Jeffrey N.; Weston, B.; Shariff, K.
2013-10-01
Primitive bodies with 10s-100s of km diameter (or even larger) may form directly from small nebula constituents, bypassing the step-by-step “incremental growth” that faces a variety of barriers at cm, m, and even 1-10km sizes. In the scenario of Cuzzi et al (Icarus 2010 and LPSC 2012; see also Chambers Icarus 2010) the immediate precursors of 10-100km diameter asteroid formation are dense clumps of chondrule-(mm-) size objects. These predictions utilize a so-called cascade model, which is popular in turbulence studies. One of its usual assumptions is that certain statistical properties of the process (the so-called multiplier pdfs p(m)) are scale-independent within a cascade of energy from large eddy scales to smaller scales. In similar analyses, Pan et al (2011 ApJ) found discrepancies with results of Cuzzi and coworkers; one possibility was that p(m) for particle concentration is not scale-independent. To assess the situation we have analyzed recent 3D direct numerical simulations of particles in turbulence covering a much wider range of scales than analyzed by either Cuzzi and coworkers or by Pan and coworkers (see Bec et al 2010, J. Flu. Mech 646, 527). We calculated p(m) at scales ranging from 45-1024η where η is the Kolmogorov scale, for both particles with a range of stopping times spanning the optimum value, and for energy dissipation in the fluid. For comparison, the p(m) for dissipation have been observed to be scale-independent in atmospheric flows (at much larger Reynolds number) for scales of at least 30-3000η. We found that, in the numerical simulations, the multiplier distributions for both particle concentration and fluid dissipation are as expected at scales of tens of η, but both become narrower and less intermittent at larger scales. 
This is consistent with observations of atmospheric flows showing scale independence over at least 30-3000η if scale-free behavior is established only after some number (of order 10) of large-scale bifurcations (at scales perhaps 10x smaller than the largest scales in the flow), with the cascade becoming scale-free only at smaller scales. Predictions of primitive body initial mass functions can now be redone using a slightly modified cascade model.
Analyzing the Validity of Relationship Banking through Agent-based Modeling
NASA Astrophysics Data System (ADS)
Nishikido, Yukihito; Takahashi, Hiroshi
This article analyzes the validity of relationship banking through agent-based modeling. In the analysis, we focus especially on the relationship between economic conditions and both lenders' and borrowers' behaviors. As a result of intensive experiments, we made the following interesting findings: (1) relationship banking contributes to reducing bad loans; (2) relationship banking is more effective in enhancing market growth than transaction banking when borrowers' sales scale is large; (3) keener competition among lenders may bring inefficiency to the market.
Coupled continuous time-random walks in quenched random environment
NASA Astrophysics Data System (ADS)
Magdziarz, M.; Szczotka, W.
2018-02-01
We introduce a coupled continuous-time random walk with coupling which is characteristic for Lévy walks. Additionally we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of uncoupled quenched trap model for Lévy flights.
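A minimal simulation of the quenched-environment idea is easy to sketch: each lattice site is assigned a heavy-tailed mean waiting time once, and the walker always waits that fixed time on revisits. This is an illustrative toy (unbiased jumps, a power-law site-time distribution with exponent alpha), not the paper's coupled Lévy-walk construction:

```python
import random

def quenched_trap_walk(n_steps, alpha=0.5, size=10001, seed=1):
    """Random walk on a ring of sites with quenched heavy-tailed waiting times.

    Each site x gets a fixed waiting time tau_x = u**(-1/alpha) drawn once
    (Pareto-like tail, illustrative choice); the same site always imposes
    the same delay, which is what 'quenched disorder' means here.
    """
    rng = random.Random(seed)
    tau = [rng.random() ** (-1.0 / alpha) for _ in range(size)]
    x, t, mid = 0, 0.0, size // 2
    for _ in range(n_steps):
        t += tau[(mid + x) % size]   # wait the site's fixed, quenched time
        x += rng.choice((-1, 1))     # unbiased nearest-neighbour jump
    return x, t

x, t = quenched_trap_walk(500)
```

In the annealed variant the waiting time would be redrawn on every visit; comparing the two displacement statistics is the standard way to see the effect of quenching.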
New probes of Cosmic Microwave Background large-scale anomalies
NASA Astrophysics Data System (ADS)
Aiola, Simone
Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the LambdaCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple LambdaCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5sigma. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from LambdaCDM at the 2.5sigma level, which is somewhat smaller than what has been previously argued. 
To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.
Low energy peripheral scaling in nucleon-nucleon scattering and uncertainty quantification
NASA Astrophysics Data System (ADS)
Ruiz Simo, I.; Amaro, J. E.; Ruiz Arriola, E.; Navarro Pérez, R.
2018-03-01
We analyze the peripheral structure of the nucleon-nucleon interaction for LAB energies below 350 MeV. To this end we transform the scattering matrix into the impact parameter representation by analyzing the scaled phase shifts (L + 1/2)δ_JLS(p) and the scaled mixing parameters (L + 1/2)ε_JLS(p) in terms of the impact parameter b = (L + 1/2)/p. According to the eikonal approximation, at large angular momentum L these functions should become a universal function of b, independent of L. This allows us to discuss in a rather transparent way the role of statistical and systematic uncertainties in the different long-range components of the two-body potential. Implications for peripheral waves obtained in chiral perturbation theory interactions to fifth order (N5LO), or from the large body of NN data considered in the SAID partial wave analysis, are also drawn by comparing them with other phenomenological high-quality interactions, likewise constructed to fit scattering data. We find that both N5LO and SAID peripheral waves disagree by more than 5σ with the Granada-2013 statistical analysis, by more than 2σ with the 6 statistically equivalent potentials fitting the Granada-2013 database, and by about 1σ with the historical set of 13 high-quality potentials developed since the 1993 Nijmegen analysis.
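The semiclassical change of variables used above is a one-liner; the numerical values below are illustrative, not taken from the analysis:

```python
def impact_parameter(L, p):
    """Semiclassical impact parameter b = (L + 1/2)/p.

    With the relative momentum p in fm^-1 (hbar = 1), b comes out in fm;
    partial waves sharing the same b probe the same radial region of the
    interaction, which is what makes the large-L scaling universal.
    """
    return (L + 0.5) / p

# Illustrative: an L = 3 wave at p = 0.7 fm^-1 probes b = 3.5/0.7 = 5 fm.
b = impact_parameter(3, 0.7)
```

Plotting the scaled phase shifts against b rather than against L is what collapses the different partial waves onto the universal eikonal curve.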
Zorick, Todd; Mandelkern, Mark A
2015-01-01
Electroencephalography (EEG) is typically viewed through the lens of spectral analysis. Recently, multiple lines of evidence have demonstrated that the underlying neuronal dynamics are characterized by scale-free avalanches. These results suggest that techniques from statistical physics may be used to analyze EEG signals. We utilized a publicly available database of fourteen subjects with waking and sleep stage 2 EEG tracings per subject, and observe that power-law dynamics of critical-state neuronal avalanches are not sufficient to fully describe essential features of EEG signals. We hypothesized that this could reflect the phenomenon of discrete scale invariance (DSI) in EEG large voltage deflections (LVDs) as being more prominent in waking consciousness. We isolated LVDs, and analyzed logarithmically transformed LVD size probability density functions (PDF) to assess for DSI. We find evidence of increased DSI in waking, as opposed to sleep stage 2 consciousness. We also show that the signatures of DSI are specific for EEG LVDs, and not a general feature of fractal simulations with similar statistical properties to EEG. Removing only LVDs from waking EEG produces a reduction in power in the alpha and beta frequency bands. These findings may represent a new insight into the understanding of the cortical dynamics underlying consciousness.
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
Scaling and memory in volatility return intervals in financial markets
NASA Astrophysics Data System (ADS)
Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene
2005-06-01
For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function P_q(τ) scales with the mean return interval τ̄ as P_q(τ) = τ̄^{-1} f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ~ x^{-γ} with γ ≈ 2. We also quantify how the conditional distribution P_q(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This "clustering" of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. Author contributions: S.H. and H.E.S. designed research; K.Y., L.M., S.H., and H.E.S. performed research; A.B. contributed new reagents/analytic tools; A.B. analyzed data; and S.H. wrote the paper. Abbreviations: pdf, probability density function; S&P 500, Standard and Poor's 500 Index; USD, U.S. dollar; JPY, Japanese yen; SEK, Swedish krona.
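Extracting the return intervals the abstract analyzes is straightforward; a minimal sketch on a made-up volatility series:

```python
def return_intervals(series, q):
    """Time gaps between successive exceedances of threshold q.

    The empirical pdf of these intervals, rescaled by their mean, is the
    quantity whose universal form f the abstract reports.
    """
    exceed = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(exceed, exceed[1:])]

# Toy daily-volatility series (fabricated); exceedances of q = 0.5 occur at
# indices 1, 4, and 6, giving return intervals of 3 and 2 days.
series = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.95, 0.2]
taus = return_intervals(series, 0.5)
```

Repeating this for several thresholds q and checking that the distributions of τ/τ̄ collapse onto one curve is the scaling test described in the abstract; the clustering result comes from conditioning each interval on its predecessor.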
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
NASA Astrophysics Data System (ADS)
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Scaling behavior of an airplane-boarding model.
Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard
2013-04-01
An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N = 2^16 = 65536, we have analyzed numerically the scaling behavior of the mean boarding time
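The flavor of such a boarding model can be conveyed with a deliberately simplified sketch: one seat per row, passengers queue in random order, each time step every passenger in the aisle advances one row if the row ahead is free, and a passenger reaching their own row sits down. This simplification differs in detail from the published Frette-Hemmer model and is illustrative only:

```python
import random

def boarding_time(n, rng):
    """Time steps until all n passengers are seated in this toy model."""
    order = list(range(n))            # target rows, one passenger per row
    rng.shuffle(order)                # random boarding order
    queue = order[:]                  # passengers still outside, front first
    aisle = {}                        # aisle position (row) -> target row
    seated, t = 0, 0
    while seated < n:
        t += 1
        # Move passengers already in the aisle, rear-most first.
        for row in sorted(aisle, reverse=True):
            target = aisle.pop(row)
            if row == target:
                seated += 1           # reached own row: sit down
            elif row + 1 not in aisle:
                aisle[row + 1] = target   # step forward
            else:
                aisle[row] = target       # blocked: stay put
        # At most one new passenger enters the aisle (at row 0) per step.
        if queue and 0 not in aisle:
            aisle[0] = queue.pop(0)
    return t

t = boarding_time(64, random.Random(7))
```

Averaging such boarding times over many random orders, and repeating for growing N, is the Monte Carlo procedure whose large-N power-law scaling the paper pins down.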
Neutrino footprint in large scale structure
NASA Astrophysics Data System (ADS)
Garay, Carlos Peña; Verde, Licia; Jimenez, Raul
2017-03-01
Recent constraints on the sum of neutrino masses, inferred by analyzing cosmological data, show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independent of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications of the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass-eigenstate mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
Population trends from the North American Breeding Bird Survey
Peterjohn, B.G.; Sauer, J.R.; Robbins, C.S.; Martin, Thomas E.; Finch, Deborah M.
1995-01-01
INTRODUCTION: Most Neotropical migrant birds are difficult to count accurately and are moderately common over large breeding distributions. Consequently, little historical information exists on their large-scale population changes, and most of this information is anecdotal. Surveys begun in this century such as Breeding Bird Censuses and Christmas Bird Counts have the potential to provide this information, but only the North American Breeding Bird Survey (BBS) achieves the extensive continental coverage necessary to document population changes for most Neotropical migrant birds. Conservationists and ecologists have begun to use BBS data to estimate population trends, but there is still widespread confusion over exactly what these data show regarding population changes. In this chapter, we review the current state of knowledge regarding population changes in Neotropical migrant birds and the methods used to analyze these changes. The primary emphasis is on the BBS (Robbins et al. 1986) because this survey provides the best available data for estimating trends of Neotropical migrants on a continental scale. To address questions about methods of analyzing survey data, we review and compare some alternative methods of analyzing BBS data. We also discuss the effectiveness of the BBS in sampling Neotropical migrant species, and review possibilities for use of alternative data sets to verify trends from the BBS.
Jeltsch, Florian; Wurst, Susanne
2015-01-01
Small-scale distribution of insect root herbivores may promote plant species diversity by creating patches of different herbivore pressure. However, the determinants of the small-scale distribution of insect root herbivores, and the impact of land use intensity on that distribution, are largely unknown. We sampled insect root herbivores and measured vegetation parameters and soil water content along transects in grasslands of different management intensity in three regions in Germany. We calculated community-weighted mean plant traits to test whether the functional plant community composition determines the small-scale distribution of insect root herbivores. To analyze spatial patterns in plant species and trait composition and insect root herbivore abundance, we computed Mantel correlograms. Insect root herbivores in the investigated grasslands mainly comprised click beetle (Coleoptera, Elateridae) larvae (43%). Total insect root herbivore numbers were positively related to community-weighted mean traits indicating high plant growth rates and biomass (specific leaf area, reproductive and vegetative plant height), and negatively related to plant traits indicating poor tissue quality (leaf C/N ratio). Generalist elaterid larvae, when analyzed independently, were also positively related to high plant growth rates and, furthermore, to root dry mass, but were not related to tissue quality. Insect root herbivore numbers were not related to plant cover, plant species richness, or soil water content. Plant species composition, and to a lesser extent plant trait composition, displayed spatial autocorrelation, which was not influenced by land use intensity. Insect root herbivore abundance was not spatially autocorrelated. We conclude that in semi-natural grasslands with a high share of generalist insect root herbivores, insect root herbivores affiliate with large, fast-growing plants, presumably because of the availability of high quantities of food.
Affiliation of insect root herbivores with large, fast growing plants may counteract dominance of those species, thus promoting plant diversity. PMID:26517119
NASA Technical Reports Server (NTRS)
Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.
1976-01-01
Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap, and of applying partial covers to the slots between the wing and flaps, was evaluated. Overall sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.
NASA Astrophysics Data System (ADS)
Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry
2015-11-01
Shock cells and large-scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and to direct the focus of the physical experiment. The flow in the experiment is visualized using long-exposure Schlieren photography, with time-resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.
A fully integrated standalone portable cavity ringdown breath acetone analyzer.
Sun, Meixiu; Jiang, Chenyu; Gong, Zhiyong; Zhao, Xiaomeng; Chen, Zhuying; Wang, Zhennan; Kang, Meiling; Li, Yingxin; Wang, Chuji
2015-09-01
Breath analysis is a promising new technique for nonintrusive disease diagnosis and metabolic status monitoring. One challenging issue in using a breath biomarker for potential disease screening is finding a quantitative relationship between the concentration of the breath biomarker and the clinical diagnostic parameters of the specific disease. Addressing this issue requires a new instrument capable of conducting real-time, online breath analysis with high data throughput, so that a large-scale clinical test (with more subjects) can be completed in a short period of time. In this work, we report a fully integrated, standalone, portable analyzer based on the cavity ringdown spectroscopy technique for near-real-time, online breath acetone measurements. The performance of the portable analyzer in measurements of breath acetone was interrogated and validated against certified gas chromatography-mass spectrometry. The results show that this new analyzer is useful for reliable online (online introduction of a breath sample without pre-treatment) breath acetone analysis with high sensitivity (57 ppb) and high data throughput (one data point per second). Subsequently, the validated breath analyzer was employed for acetone measurements in 119 human subjects under various conditions. The instrument design, packaging, specifications, and future improvements are also described. From an optical ringdown cavity operated by the previously reported lab-assembled electronics to this fully integrated standalone instrument, we have enabled a new scientific tool suited for large-scale breath acetone analysis and created an instrument platform that can be adapted for the study of other breath biomarkers by using different lasers and ringdown mirrors covering the corresponding spectral fingerprints.
NASA Astrophysics Data System (ADS)
Dai, A. J.; Chen, Z. Y.; Huang, D. W.; Tong, R. H.; Zhang, J.; Wei, Y. N.; Ma, T. K.; Wang, X. L.; Yang, H. Y.; Gao, H. L.; Pan, Y.; the J-TEXT Team
2018-05-01
A large number of runaway electrons (REs) with energies as high as several tens of mega-electron-volts (MeV) may be generated during disruptions in a large-scale tokamak. The kinetic energy carried by REs is eventually deposited on the plasma-facing components, causing damage and posing a threat to the operation of the tokamak. The magnetic energy remaining after a thermal quench is significant in a large-scale tokamak, and its conversion to runaway kinetic energy increases the threat runaway electrons pose to the first wall. The magnetic energy dissipated inside the vacuum vessel (VV) equals the decrease of the initial magnetic energy inside the VV plus the magnetic energy flowing into the VV during a disruption. Based on the estimated magnetic energy, the evolution of the magnetic-to-kinetic energy conversion is analyzed through three periods in disruptions with a runaway current plateau.
Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience.
Paninski, L; Cunningham, J P
2018-06-01
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron-resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie in simply collecting data from large neural populations, but rather in understanding these data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate those questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuron precision.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Dytrych, T.; Maris, P.; Launey, K. D.; ...
2016-06-22
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
Akanda, Ali Shafqat; Jutla, Antarpreet S.; Gute, David M.; Sack, R. Bradley; Alam, Munirul; Huq, Anwar; Colwell, Rita R.; Islam, Shafiqul
2013-01-01
The highly populated floodplains of the Bengal Delta have a long history of endemic and epidemic cholera outbreaks, both coastal and inland. Previous studies have not addressed the spatio-temporal dynamics of population vulnerability related to the influence of underlying large-scale processes. We analyzed spatial and temporal variability of cholera incidence across six surveillance sites in the Bengal Delta and their association with regional hydroclimatic and environmental drivers. More specifically, we used salinity and flood inundation modeling across the vulnerable districts of Bangladesh to test earlier proposed hypotheses on the role of these environmental variables. Our results show a strong influence of seasonal and interannual variability in estuarine salinity on spring outbreaks and of inland flooding on fall outbreaks. A large segment of the population in the Bengal Delta floodplains remains vulnerable to these biannual cholera transmission mechanisms, which provide ecologic and environmental conditions for outbreaks over large geographic regions. PMID:24019441
SiGN: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and for other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. Across these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and are therefore designed to exploit the 10-petaflops speed. The software will be freely available to "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed with Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2015-05-01
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient methods for estimating the heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating the heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
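The friendship-paradox bias that the tail-scope idea exploits can be illustrated with a minimal sketch (hypothetical function names, not the authors' code): recording the degree of a random neighbor of a random node reaches each neighbor with probability proportional to its own degree, so high-degree nodes are oversampled relative to uniform node sampling.

```python
import random

def uniform_degrees(adj, n_samples, rng):
    # Baseline: sample nodes uniformly at random and record their degrees.
    nodes = sorted(adj)
    return [len(adj[rng.choice(nodes)]) for _ in range(n_samples)]

def tailscope_degrees(adj, n_samples, rng):
    # Friendship-paradox sampling: pick a uniform node, then record the
    # degree of one of its random neighbors. A neighbor is reached with
    # probability proportional to its own degree, which biases the
    # sample toward the heavy tail of the degree distribution.
    nodes = [v for v in sorted(adj) if adj[v]]
    samples = []
    for _ in range(n_samples):
        v = rng.choice(nodes)
        samples.append(len(adj[rng.choice(sorted(adj[v]))]))
    return samples
```

On a star graph the contrast is extreme: uniform sampling almost always lands on degree-1 leaves, while neighbor sampling almost always lands on the hub.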
Continuous evolutionary change in Plio-Pleistocene mammals of eastern Africa
NASA Astrophysics Data System (ADS)
Bibi, Faysal; Kiessling, Wolfgang
2015-08-01
Much debate has revolved around the question of whether the mode of evolutionary and ecological turnover in the fossil record of African mammals was continuous or pulsed, and the degree to which faunal turnover tracked changes in global climate. Here, we assembled and analyzed large specimen databases of the fossil record of eastern African Bovidae (antelopes) and Turkana Basin large mammals. Our results indicate that speciation and extinction proceeded continuously throughout the Pliocene and Pleistocene, as did increases in the relative abundance of arid-adapted bovids, and in bovid body mass. Species durations were similar among clades with different ecological attributes. Occupancy patterns were unimodal, with long and nearly symmetrical origination and extinction phases. A single origination pulse may be present at 2.0-1.75 Ma, but besides this, there is no evidence that evolutionary or ecological changes in the eastern African record tracked rapid, 100,000-y-scale changes in global climate. Rather, eastern African large mammal evolution tracked global or regional climatic trends at long (million year) time scales, while local, basin-scale changes (e.g., tectonic or hydrographic) and biotic interactions ruled at shorter timescales.
Usage Patterns of Open Genomic Data
ERIC Educational Resources Information Center
Xia, Jingfeng; Liu, Ying
2013-01-01
This paper uses Gene Expression Omnibus (GEO), a data repository in biomedical sciences, to examine the usage patterns of open data repositories. It attempts to identify the degree of recognition of data reuse value and to understand how e-science has impacted large-scale scholarship. By analyzing a list of 1,211 publications that cite GEO data…
Sex Differences in Arithmetical Performance Scores: Central Tendency and Variability
ERIC Educational Resources Information Center
Martens, R.; Hurks, P. P. M.; Meijs, C.; Wassenberg, R.; Jolles, J.
2011-01-01
The present study aimed to analyze sex differences in arithmetical performance in a large-scale sample of 390 children (193 boys) frequenting grades 1-9. Past research in this field has focused primarily on average performance, implicitly assuming homogeneity of variance, for which support is scarce. This article examined sex differences in…
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of bioche...
Automated Essay Scoring versus Human Scoring: A Correlational Study
ERIC Educational Resources Information Center
Wang, Jinhao; Brown, Michelle Stallone
2008-01-01
The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…
Validation of Automated Scoring of Oral Reading
ERIC Educational Resources Information Center
Balogh, Jennifer; Bernstein, Jared; Cheng, Jian; Van Moere, Alistair; Townshend, Brent; Suzuki, Masanori
2012-01-01
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the…
Data Management Practices and Perspectives of Atmospheric Scientists and Engineering Faculty
ERIC Educational Resources Information Center
Wiley, Christie; Mischo, William H.
2016-01-01
This article analyzes 21 in-depth interviews of engineering and atmospheric science faculty at the University of Illinois Urbana-Champaign (UIUC) to determine faculty data management practices and needs within the context of their research activities. A detailed literature review of previous large-scale and institutional surveys and interviews…
USDA-ARS?s Scientific Manuscript database
Meishan is a famous Chinese indigenous pig breed known for its extremely high fecundity. To explore if Meishan has unique evolutionary process and genome characteristics differing from other pig breeds, we systematically analyzed its genetic divergence, and demographic history by large-scale reseque...
The Building of Multimedia Communications Network based on Session Initiation Protocol
NASA Astrophysics Data System (ADS)
Yuexiao, Han; Yanfu, Zhang
In this paper, we present a novel design for a distributed multimedia communications network. We introduce the distribution tactic, flow procedure, and particular structure of the design, and analyze the scalability, stability, robustness, extensibility, and transmission delay of this architecture. Finally, the results show that our framework is suitable for very large scale communications.
Data Mining in Earth System Science (DMESS 2011)
Forrest M. Hoffman; J. Walter Larson; Richard Tran Mills; Bhorn-Gustaf Brooks; Auroop R. Ganguly; William Hargrove; et al
2011-01-01
From field-scale measurements to global climate simulations and remote sensing, the growing body of very large and long time-series Earth science data are increasingly difficult to analyze, visualize, and interpret. Data mining, information theoretic, and machine learning techniques, such as cluster analysis, singular value decomposition, block entropy, Fourier and...
Analyzing animal movement patterns using potential functions
H. K. Preisler; A. A. Ager; M. J. Wisdom
2013-01-01
The advent of GPS technology has made it possible to study human-wildlife interactions on large landscapes and quantify behavioral responses to recreation and other anthropogenic disturbances at increasingly fine scales. Of particular interest are the potential impacts on habitat use patterns, energetics, and cascading impacts on fecundity and other life history traits...
Do Sustainability Projects Stimulate Organizational Learning in Universities?
ERIC Educational Resources Information Center
Albrecht, Patrick; Burandt, Simon; Schaltegger, Stefan
2007-01-01
Purpose: The purpose of this paper is to analyze the preparation of a sustainability report and a large-scale energy-saving campaign with regards to their role for organizational learning (OL). Similar processes indicating OL were observed during the implementation of both projects. Along the lines of a theoretical framework of OL these processes…
NASA Astrophysics Data System (ADS)
Liu, Jing-cheng; Wei, Xiu-ting; Zhou, Zhi-yong; Wei, Zhen-wen
2018-03-01
The fluid-structure interaction performance of a plate-fin heat exchanger (PFHE) with serrated fins in large-scale air-separation equipment was investigated in this paper. The stress and deformation of the fins were analyzed, and the interaction equations were derived by the Galerkin method. The governing equations of fluid flow and heat transfer in the PFHE were solved by the finite volume method (FVM). The strain and stress distributions were calculated for large-scale air separation equipment, and the coupling behavior of the serrated fins under laminar conditions was analyzed. The results indicated that the interactions between the fins and the fluid flow in the exchanger have a significant impact on heat transfer enhancement; the strain and stress of the fins, which include contributions from the dynamic pressure of the sealing head and the flow impact, grow with increasing flow velocity. The effects are especially significant at the junction of two fins because of fin misalignment. It can be concluded that the soldering process and channel width lead to structural deformation of the fins in the exchanger, degrading heat transfer efficiency.
Scale-invariant properties of public-debt growth
NASA Astrophysics Data System (ADS)
Petersen, A. M.; Podobnik, B.; Horvatic, D.; Stanley, H. E.
2010-05-01
Public debt is one of the important economic variables that quantitatively describes a nation's economy. Because bankruptcy is a risk faced even by institutions as large as governments (e.g., Iceland), national debt should be strictly controlled with respect to national wealth. Also, the problem of eliminating extreme poverty in the world is closely connected to the study of extremely poor debtor nations. We analyze the time evolution of national public debt and find "convergence": initially less-indebted countries increase their debt more quickly than initially more-indebted countries. We also analyze the public debt-to-GDP ratio {\cal R}, a proxy for default risk, and approximate the probability density function P({\cal R}) with a Gamma distribution, which can be used to establish thresholds for sustainable debt. We also observe "convergence" in {\cal R}: countries with initially small {\cal R} increase their {\cal R} more quickly than countries with initially large {\cal R}. The scaling relationships for debt and {\cal R} have practical applications, e.g., the Maastricht Treaty requires members of the European Monetary Union to maintain {\cal R} < 0.6.
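The kind of Gamma-distribution fit and threshold probability described above can be sketched as follows (a minimal illustration, not the authors' procedure: the method-of-moments estimator, function names, and trapezoidal integration are all assumptions):

```python
import math

def fit_gamma_mom(samples):
    # Method-of-moments fit of a Gamma distribution to debt-to-GDP ratios:
    # shape k = mean^2 / variance, scale theta = variance / mean.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean * mean / var, var / mean

def gamma_pdf(x, k, theta):
    # Density of the Gamma(k, theta) distribution.
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def prob_exceeds(r, k, theta, upper=10.0, steps=20000):
    # P(R > r) by trapezoidal integration of the fitted density,
    # e.g. the probability of breaching the Maastricht limit r = 0.6.
    dx = (upper - r) / steps
    total = 0.0
    for i in range(steps):
        x0 = r + i * dx
        total += 0.5 * (gamma_pdf(x0, k, theta) + gamma_pdf(x0 + dx, k, theta)) * dx
    return total
```

Fitting once and then evaluating tail probabilities is what turns the fitted P({\cal R}) into a sustainable-debt threshold.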
NASA Astrophysics Data System (ADS)
Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi
2017-09-01
A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in the gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability at large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual densities. The second is establishing the maximum limit of the capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the capillary number. The results show that the capillary number can be increased to 3.0 × 10^-3, whereas actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
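The nearest-neighbor factorization behind the NNGP can be sketched in one dimension (illustrative only; the paper's full hierarchical MCMC model is far richer, and the exponential covariance, function name, and parameter values here are assumptions). Conditioning each point on at most m earlier-ordered neighbors replaces one n x n solve with n small m x m solves.

```python
import numpy as np

def nngp_log_density(y, coords, m=5, sigma2=1.0, phi=1.0, jitter=1e-10):
    # NNGP sketch in 1-D: p(y) = prod_i p(y_i | y_N(i)), where N(i) holds at
    # most m nearest previously-ordered neighbors; each factor needs only an
    # m x m solve, so the cost is O(n * m^3) -- linear in the number of points.
    y, coords = np.asarray(y, float), np.asarray(coords, float)
    order = np.argsort(coords)
    y, coords = y[order], coords[order]
    cov = lambda d: sigma2 * np.exp(-phi * np.abs(d))  # exponential covariance (assumed)
    logdens = 0.0
    for i in range(len(y)):
        idx = np.argsort(np.abs(coords[:i] - coords[i]))[:m]
        if idx.size == 0:
            mu, var = 0.0, sigma2          # first point: marginal N(0, sigma2)
        else:
            cn = coords[idx]
            C = cov(cn[:, None] - cn[None, :]) + jitter * np.eye(idx.size)
            c = cov(cn - coords[i])
            w = np.linalg.solve(C, c)      # kriging weights for the neighbors
            mu, var = w @ y[idx], sigma2 - w @ c
        logdens += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return logdens
```

With m at least n - 1 the factorization conditions on all previous points, so it reproduces the exact Gaussian process density; the savings come from taking m much smaller than n.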
Mapping the Energy Cascade in the North Atlantic Ocean: The Coarse-graining Approach
Aluie, Hussein; Hecht, Matthew; Vallis, Geoffrey K.
2017-11-14
A coarse-graining framework is implemented to analyze nonlinear processes, measure energy transfer rates, and map out the energy pathways from simulated global ocean data. Traditional tools for measuring the energy cascade from turbulence theory, such as the spectral flux or spectral transfer, rely on the assumption of statistical homogeneity, or at least a large separation between the scales of motion and the scales of statistical inhomogeneity. The coarse-graining framework allows probing the fully nonlinear dynamics simultaneously in scale and in space, and is not restricted by those assumptions. This study describes how the framework can be applied to ocean flows.
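The coarse-graining idea can be illustrated with a minimal 1-D sketch (a top-hat filter and a subfilter stress of the Leonard type; the function names and the 1-D reduction are illustrative assumptions, not the study's implementation). Filtering the velocity at a scale ell splits the dynamics into resolved and subfilter parts, and the product of the subfilter stress with the resolved gradient measures energy transfer across that scale at each point in space.

```python
import numpy as np

def box_filter(f, ell):
    # Coarse-grain f at scale ell by a circular moving average, the discrete
    # analogue of convolving with a top-hat kernel on a periodic domain.
    return np.mean([np.roll(f, s) for s in range(-(ell // 2), ell - ell // 2)], axis=0)

def subfilter_flux(u, ell, dx=1.0):
    # Pointwise energy transfer across scale ell in 1-D: Pi = -tau * d(ubar)/dx,
    # with the subfilter stress tau = bar(u*u) - bar(u)*bar(u).
    ubar = box_filter(u, ell)
    tau = box_filter(u * u, ell) - ubar * ubar
    return -tau * np.gradient(ubar, dx)
```

Because the flux is a field rather than a single spectral number, it can be mapped in space, which is the point of the coarse-graining approach.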
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-01-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies. PMID:19137113
Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform
NASA Astrophysics Data System (ADS)
Liu, W.; Yao, H.; Yang, H. Y.
2017-12-01
Finite fault inversion is a widely used method to study earthquake rupture processes. Some previous studies have proposed different methods to implement finite fault inversion, including time-domain, frequency-domain, and wavelet-domain methods. Many studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower frequency waveforms correspond to larger-scale rupture characteristics, while higher frequency data are representative of smaller-scale ones. Therefore, multi-scale analysis can help us understand the earthquake rupture process thoroughly, from larger scales to smaller scales. By using the wavelet transform, wavelet-domain methods can analyze both the time and frequency information of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) implement finite fault inversion with both lower and higher frequency signals together, to recover larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative strategy with a two-step procedure, i.e., first constraining the larger-scale characteristics with lower frequency signals, and then resolving the smaller-scale ones with higher frequency signals. We have designed synthetic tests to validate our strategy and compare it with the traditional one. We have also applied our strategy to study the 2015 Gorkha, Nepal earthquake using tele-seismic waveforms. Both the traditional method and our two-step strategy analyze the data only at different temporal scales (i.e., different frequency bands), while the spatial distribution of model parameters also shows multi-scale characteristics. 
A more sophisticated strategy is to transfer the slip model into different spatial scales, and then analyze the smooth slip distribution (larger scales) with lower frequency data firstly and more detailed slip distribution (smaller scales) with higher frequency data subsequently. We are now implementing the slip inversion using both spatial and temporal domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.
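The coarse-then-fine logic of the two-step procedure can be caricatured on a linear toy problem: first fit a smooth, coarsely parameterized slip model, then invert the residual data for small-scale detail on the full grid. Everything below is invented for illustration (random stand-in Green's functions, a triangular coarse basis, a synthetic slip profile); real inversions constrain the two steps with low- and high-frequency waveforms rather than with the model basis alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "fault" with 32 slip patches: a broad large-scale
# slip profile plus one narrow small-scale asperity.
n = 32
x = np.arange(n)
slip_true = np.exp(-((x - 16) / 8.0) ** 2) + 0.5 * np.exp(-((x - 10) / 1.5) ** 2)

G = rng.normal(size=(64, n))   # stand-in for the Green's function matrix
d = G @ slip_true              # synthetic "waveform" data

# Step 1: constrain the larger-scale slip with a coarse basis of
# 8 overlapping triangular functions (smooth by construction).
centers = np.linspace(0, n - 1, 8)
B = np.maximum(0.0, 1 - np.abs(x[:, None] - centers[None, :]) / 4.0)
coef, *_ = np.linalg.lstsq(G @ B, d, rcond=None)
slip_coarse = B @ coef

# Step 2: invert the residual data for the smaller-scale correction.
resid = d - G @ slip_coarse
delta, *_ = np.linalg.lstsq(G, resid, rcond=None)
slip_est = slip_coarse + delta
```

In this noise-free, full-rank toy setup the second step recovers the asperity that the coarse basis cannot represent, which is the structural point of the two-step strategy.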
NASA Astrophysics Data System (ADS)
Han, Junwon
The remarkable development of polymer synthesis techniques to make complex polymers with controlled chain architectures has inevitably demanded the advancement of polymer characterization tools to analyze the molecular dispersity in polymeric materials beyond size exclusion chromatography (SEC). In particular, man-made synthetic copolymers that consist of more than one monomer type are disperse mixtures of polymer chains that have distributions in terms of both chemical heterogeneity and chain length (molar mass). While the molecular weight distribution has been quite reliably estimated by the SEC, it is still challenging to properly characterize the chemical composition distribution in the copolymers. Here, I have developed and applied adsorption-based interaction chromatography (IC) techniques as a promising tool to characterize and fractionate polystyrene-based block, random and branched copolymers in terms of their chemical heterogeneity. The first part of this thesis is focused on the adsorption-desorption based purification of PS-b-PMMA diblock copolymers using nanoporous silica. The liquid chromatography analysis and large scale purification are discussed for the PS-b-PMMA block copolymers that have been synthesized by sequential anionic polymerization. SEC and IC are compared to critically analyze the contents of PS homopolymers in the as-synthesized block copolymers. In addition, I have developed an IC technique to provide faster and more reliable information on the chemical heterogeneity in the as-synthesized block copolymers. Finally, a large scale (multi-gram) separation technique is developed to obtain "homopolymer-free" block copolymers via a simple chromatographic filtration technique. 
By taking advantage of the large specific surface area of nanoporous silica (≈300m 2/g), large scale purification of neat PS-b-PMMA has successfully been achieved by controlling adsorption and desorption of the block copolymers on the silica gel surface using a gravity column. The second part of this thesis is focused on the liquid chromatography analysis and fractionation of RAFT-polymerized PS-b -PMMA diblock copolymers and AFM studies. In this study, PS- b-PMMA block copolymers were synthesized by a RAFT free radical polymerization process---the PMMA block with a phenyldithiobenzoate end group was synthesized first. The contents of unreacted PS and PMMA homopolymers in as-synthesized PS-b-PMMA block copolymers were quantitatively analyzed by solvent gradient interaction chromatography (SGIC) technique employing bare silica and C18-bonded silica columns, respectively. In addition, by 2-dimensional large-scale IC fractionation method, atomic force microscopy (AFM) study of these fractionated samples revealed various morphologies with respect to the chemical composition of each fraction. The third part of this thesis is to analyze random copolymers with tunable monomer sequence distributions using interaction chromatography. Here, IC was used for characterizing the composition and monomer sequence distribution in statistical copolymers of poly(styrene-co-4-bromostyrene) (PBrxS). The PBrS copolymers were synthesized by the bromination of monodisperse polystyrenes; the degree of bromination (x) and the sequence distribution were adjusted by varying the bromination time and the solvent quality, respectively. Both normal-phase (bare silica) and reversed-phase (C18-bonded silica) columns were used at different combinations of solvents and non-solvents to monitor the content of the 4-bromostyrene units in the copolymer and their average monomer sequence distribution. 
The fourth part of this thesis is to analyze and fractionate highly branched polymers such as dendronized polymers and star-shaped homo and copolymers. I have developed an interaction chromatography technique to separate polymers with nonlinear chain architecture. Specifically, the IC technique has been used to separate dendronized polymers and PS-based highly branched copolymers and to ultimately obtain well-defined dendronized or branched copolymers with a low polydispersity. The effects of excess arm-polymers on (1) the micellar self-assembly of dendronized polymers and (2) the regularity of the pore morphology in the low-k applications by the sol-gel process have been studied.
Li, Guo Chun; Song, Hua Dong; Li, Qi; Bu, Shu Hai
2017-11-01
In Abies fargesii forests of the giant panda's habitats in Mt. Taibai, the spatial distribution patterns and interspecific associations of the main tree species, and their spatial associations with the understory flowering Fargesia qinlingensis, were analyzed at multiple scales by univariate and bivariate O-ring functions in point pattern analysis. The results showed that in the A. fargesii forest, A. fargesii was the most abundant species, but its population structure was in decline. The population of Betula platyphylla was relatively young, with a stable population structure, while the population of B. albo-sinensis was in decline. The three populations showed aggregated distributions at small scales and gradually shifted to random distributions with increasing spatial scale. Spatial associations among tree species occurred mainly at small scales and gradually disappeared with increasing scale. A. fargesii and B. platyphylla were positively associated with flowering F. qinlingensis at large and medium scales, whereas B. albo-sinensis was negatively associated with flowering F. qinlingensis at large and medium scales. The interaction between trees and F. qinlingensis in the habitats of the giant panda promoted the dynamic succession and development of the forests, which changed the environment of the giant panda's habitats in the Qinling Mountains.
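The univariate O-ring statistic used above measures the mean density of further points in an annulus of radius r around a typical point; values above the overall intensity indicate aggregation at that scale. Below is a minimal sketch without the edge corrections and Monte Carlo envelopes a real analysis would use; the toy stem coordinates are invented.

```python
import numpy as np

def o_ring(points, r, dr):
    """Univariate O-ring statistic O(r): mean density of further points
    in the annulus [r, r + dr) around a typical point (no edge correction)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point itself
    in_ring = (d >= r) & (d < r + dr)
    ring_area = np.pi * ((r + dr) ** 2 - r ** 2)
    return in_ring.sum(axis=1).mean() / ring_area

# A tight clump of 5 stems on a 10 m x 10 m plot: O(r) at small r far
# exceeds the overall intensity (5 / 100 per m^2), signalling aggregation.
clump = np.array([[5.0 + 0.1 * i, 5.0] for i in range(5)])
o_small_r = o_ring(clump, r=0.0, dr=1.0)
intensity = 5 / 100.0
```

The bivariate version is the same computation with distances taken between two different species' point sets, which is how the tree-bamboo associations above are quantified.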
López-Padilla, Alexis; Ruiz-Rodriguez, Alejandro; Restrepo Flórez, Claudia Estela; Rivero Barrios, Diana Marsela; Reglero, Guillermo; Fornari, Tiziana
2016-06-25
Vaccinium meridionale Swartz (Mortiño or Colombian blueberry) is one of the Vaccinium species abundantly found across the Colombian mountains, which are characterized by high contents of polyphenolic compounds (anthocyanins and flavonoids). The supercritical fluid extraction (SFE) of Vaccinium species has mainly focused on the study of V. myrtillus L. (blueberry). In this work, the SFE of Mortiño fruit from Colombia was studied in a small-scale extraction cell (273 cm³) and different extraction pressures (20 and 30 MPa) and temperatures (313 and 343 K) were investigated. Then, process scaling-up to a larger extraction cell (1350 cm³) was analyzed using well-known semi-empirical engineering approaches. The Broken and Intact Cell (BIC) model was adjusted to represent the kinetic behavior of the low-scale extraction and to simulate the large-scale conditions. Extraction yields obtained were in the range 0.1%-3.2%. Most of the Mortiño solutes are readily accessible and, thus, 92% of the extractable material was recovered in around 30 min. The constant CO₂ residence time criterion produced excellent results regarding the small-scale kinetic curve according to the BIC model, and this conclusion was experimentally validated in large-scale kinetic experiments.
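The constant CO₂ residence time criterion validated above amounts to scaling the solvent flow rate by the ratio of extraction cell volumes, so that the time the CO₂ spends in contact with the bed is unchanged. A minimal sketch follows; the cell volumes are from the abstract, but the 30 cm³/min small-scale flow rate is an invented figure for illustration.

```python
def scale_up_flow(q_small, v_small, v_large):
    """Flow rate for the large cell that keeps the CO2 residence time
    (cell volume / volumetric flow rate) equal to the small-scale run."""
    residence_time = v_small / q_small   # e.g. minutes
    return v_large / residence_time

# Small cell 273 cm3, large cell 1350 cm3 (values from the abstract);
# the small-scale flow rate below is hypothetical.
q_large = scale_up_flow(q_small=30.0, v_small=273.0, v_large=1350.0)
```

With the residence time held fixed, the small-scale kinetic curve predicted by the BIC model can be carried over to the large cell, which is the behavior the authors report verifying experimentally.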
Intermittency measurement in two-dimensional bacterial turbulence
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Ding, Long; Huang, Yongxiang; Chen, Ming; Lu, Zhiming; Liu, Yulu; Zhou, Quan
2016-06-01
In this paper, an experimental velocity database of bacterial collective motion (Bacillus subtilis) in the turbulent phase, with a volume filling fraction of 84%, provided by Professor Goldstein's group at Cambridge University (UK), was analyzed to characterize the scaling behavior of this active turbulence system. This was accomplished by performing a Hilbert-based methodology analysis to retrieve the scaling property without the β-limitation. A dual-power-law behavior separated by the viscosity scale ℓν was observed for the qth-order Hilbert moment Lq(k). This dual power law corresponds to an inverse cascade, since the scaling range is above the injection scale R, e.g., the bacterial body length. The measured scaling exponents ζ(q) of both the small-scale (k > kν) and large-scale (k
Tools for Large-Scale Mobile Malware Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierma, Michael
Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been of interest to photogrammetric researchers, as it is of guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
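The conjugate gradient method mentioned in the final solver step is attractive for BA normal equations because it needs only matrix-vector products, never a factorization. Below is a textbook dense-matrix sketch on a tiny symmetric positive-definite system; a real BA implementation would apply the same iteration to the huge, sparse normal matrix (here the 2x2 system is purely illustrative).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A using only
    matrix-vector products (no factorization of A)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD system standing in for the (much larger, sparse) normal equations
M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
sol = conjugate_gradient(M, rhs)
```

Pairing this iteration with a sparse three-array storage scheme, as the abstract describes, is what keeps both memory and runtime manageable at scale.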
The Future of Wind Energy in California: Future Projections in Variable-Resolution CESM
NASA Astrophysics Data System (ADS)
Wang, M.; Ullrich, P. A.; Millstein, D.; Collier, C.
2017-12-01
This study focuses on the wind energy characterization and future projection at five primary wind turbine sites in California. Historical (1980-2000) and mid-century (2030-2050) simulations were produced using the Variable-Resolution Community Earth System Model (VR-CESM) to analyze the trends and variations in wind energy under climate change. Datasets from Det Norske Veritas Germanischer Lloyd (DNV GL), MERRA-2, CFSR, NARR, as well as surface observational data were used for model validation and comparison. Significant seasonal wind speed changes under RCP8.5 were detected at several wind farm sites. Large-scale patterns were then investigated to analyze the synoptic-scale impact on localized wind change. The agglomerative clustering method was applied to analyze and group different wind patterns. The associated meteorological background of each cluster was investigated to analyze the drivers of different wind patterns. This study improves the characterization of uncertainty around the magnitude and variability in space and time of California's wind resources in the near future, and also enhances understanding of the physical mechanisms related to the trends in wind resource variability.
Novel Miscanthus Germplasm-Based Value Chains: A Life Cycle Assessment
Wagner, Moritz; Kiesel, Andreas; Hastings, Astley; Iqbal, Yasir; Lewandowski, Iris
2017-01-01
In recent years, considerable progress has been made in miscanthus research: improvement of management practices, breeding of new genotypes, especially for marginal conditions, and development of novel utilization options. The purpose of the current study was a holistic analysis of the environmental performance of such novel miscanthus-based value chains. In addition, the relevance of the analyzed environmental impact categories was assessed. A Life Cycle Assessment was conducted to analyse the environmental performance of the miscanthus-based value chains in 18 impact categories. In order to include the substitution of a reference product, a system expansion approach was used. In addition, a normalization step was applied. This allowed the relevance of these impact categories to be evaluated for each utilization pathway. The miscanthus was cultivated on six sites in Europe (Aberystwyth, Adana, Moscow, Potash, Stuttgart and Wageningen) and the biomass was utilized in the following six pathways: (1) small-scale combustion (heat)—chips; (2) small-scale combustion (heat)—pellets; (3) large-scale combustion (CHP)—biomass baled for transport and storage; (4) large-scale combustion (CHP)—pellets; (5) medium-scale biogas plant—ensiled miscanthus biomass; and (6) large-scale production of insulation material. Thus, in total, the environmental performance of 36 site × pathway combinations was assessed. The comparatively high normalized results of human toxicity, marine, and freshwater ecotoxicity, and freshwater eutrophication indicate the relevance of these impact categories in the assessment of miscanthus-based value chains. Differences between the six sites can almost entirely be attributed to variations in biomass yield. However, the environmental performance of the utilization pathways analyzed varied widely. The largest differences were shown for freshwater and marine ecotoxicity, and freshwater eutrophication. 
The production of insulation material had the lowest impact on the environment, with net benefits in all impact categories except three (marine eutrophication, human toxicity, agricultural land occupation). This performance can be explained by the multiple use of the biomass, first as a material and subsequently as an energy carrier, and by the substitution of an emission-intensive reference product. The results of this study emphasize the importance of assessing all environmental impacts when selecting appropriate utilization pathways. PMID:28642784
Prospect Theory for Online Financial Trading
Liu, Yang-Yu; Nacher, Jose C; Ochiai, Tomoshiro; Martino, Mauro; Altshuler, Yaniv
2014-01-01
Prospect theory is widely viewed as the best available descriptive model of how people evaluate risk in experimental settings. According to prospect theory, people are typically risk-averse with respect to gains and risk-seeking with respect to losses, known as the “reflection effect”. People are much more sensitive to losses than to gains of the same magnitude, a phenomenon called “loss aversion”. Although prospect theory has been well developed in behavioral economics at the theoretical level, there exist very few large-scale empirical studies, and most of the previous studies have been undertaken with micro-panel data. Here we analyze over 28.5 million trades made by 81.3 thousand traders of an online financial trading community over 28 months, aiming to explore the large-scale empirical aspect of prospect theory. By analyzing and comparing the behavior of winning and losing trades and traders, we find clear evidence of the reflection effect and the loss aversion phenomenon, which are essential in prospect theory. This work hence provides unprecedented large-scale empirical evidence for prospect theory, with immediate implications in financial trading, e.g., developing new trading strategies by minimizing the impact of the reflection effect and the loss aversion phenomenon. Moreover, we introduce three novel behavioral metrics to differentiate winning and losing traders based on their historical trading behavior. This offers us potential opportunities to augment online social trading, where traders are allowed to watch and follow the trading activities of others, by predicting potential winners based on their historical trading behavior. PMID:25330203
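The reflection effect and loss aversion described above are usually summarized by the Kahneman-Tversky value function: concave over gains, convex over losses, and steeper for losses. The sketch below uses the classic 1992 parameter estimates (α = β = 0.88, λ = 2.25), which are illustrative defaults and not values fitted to the trading data in this study.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: x**alpha for gains,
    -lam * (-x)**beta for losses. lam > 1 encodes loss aversion;
    alpha, beta < 1 encode the reflection effect's curvature."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

gain = prospect_value(100.0)    # subjective value of a $100 gain
loss = prospect_value(-100.0)   # subjective value of a $100 loss
```

Note that |v(-100)| > v(100) (losses loom larger than gains) and v(200) < 2·v(100) (diminishing sensitivity to gains), the two regularities the trading data are said to exhibit.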
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D-MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures, and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.
Causality as an emergent macroscopic phenomenon: The Lee-Wick O(N) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grinstein, Benjamin; O'Connell, Donal; Wise, Mark B.
2009-05-15
In quantum mechanics the deterministic property of classical physics is an emergent phenomenon appropriate only on macroscopic scales. Lee and Wick introduced Lorentz invariant quantum theories where causality is an emergent phenomenon appropriate for macroscopic time scales. In this paper we analyze a Lee-Wick version of the O(N) model. We argue that in the large-N limit this theory has a unitary and Lorentz invariant S matrix and is therefore free of paradoxes in scattering experiments. We discuss some of its acausal properties.
Identification of Phosphorylated Proteins on a Global Scale.
Iliuk, Anton
2018-05-31
Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc.
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
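A useful baseline for detecting the non-random break clustering discussed above is the purely random (uniform) breakage model, under which n breaks on a chromosome of length L yield n + 1 fragments with mean size L / (n + 1). The Monte Carlo sketch below is that baseline only, not the DNAbreak chromatin model; the chromosome length and break count are invented figures.

```python
import numpy as np

rng = np.random.default_rng(42)

def fragment_sizes(length, n_breaks, rng):
    """Place n_breaks double-strand breaks uniformly at random on a
    chromosome of the given length; return the fragment sizes."""
    breaks = np.sort(rng.uniform(0, length, n_breaks))
    edges = np.concatenate([[0.0], breaks, [length]])
    return np.diff(edges)

# Monte Carlo over many "cells": 4 breaks on a 100 Mbp chromosome
# gives 5 fragments with mean size 100 / 5 = 20 Mbp.
sizes = np.concatenate([fragment_sizes(100.0, 4, rng) for _ in range(2000)])
mean_size = sizes.mean()
```

Deviations of an observed fragment-size distribution from this random-breakage prediction, an excess of short fragments in particular, are what signal clustered damage from densely ionizing tracks.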
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
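The horizontal-gradient quantities at issue are typically obtained by fitting a linear (planar) wind field through the station observations; with exactly three profiler sites this is the classic triangle estimate, and its sensitivity to station geometry and wind errors is the source of the uncertainty discussed above. The sketch below uses invented station coordinates and a noise-free linear field purely to show the fit.

```python
import numpy as np

def plane_fit_gradients(x, y, u):
    """Least-squares fit u = u0 + (du/dx) x + (du/dy) y through station
    winds; returns (u0, du/dx, du/dy). With three non-collinear stations
    the fit is exact (the profiler-triangle estimate)."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coef

# Three hypothetical stations (coordinates in km) sampling a linear field
x = np.array([0.0, 100.0, 50.0])
y = np.array([0.0, 0.0, 80.0])
u = 5.0 + 0.02 * x - 0.01 * y     # zonal wind in m/s with known gradients
u0, dudx, dudy = plane_fit_gradients(x, y, u)
```

Repeating the fit for v gives divergence (du/dx + dv/dy) and vorticity (dv/dx - du/dy); with observational noise added, the spread of these estimates across triangle geometries is exactly the uncertainty the case study highlights.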
NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity.
Al-Awami, Ali K; Beyer, Johanna; Strobelt, Hendrik; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter; Hadwiger, Markus
2014-12-01
We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.
Measurements of Turbulence at Two Tidal Energy Sites in Puget Sound, WA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomson, Jim; Polagye, Brian; Durgesh, Vibhav
2012-06-05
Field measurements of turbulence are presented from two sites in Puget Sound, WA (USA) that are proposed for electrical power generation using tidal current turbines. Rapidly sampled data from multiple acoustic Doppler instruments are analyzed to obtain statistical measures of fluctuations in both the magnitude and direction of the tidal currents. The resulting turbulence intensities (i.e., the turbulent velocity fluctuations normalized by the harmonic tidal currents) are typically 10% at the hub-heights (i.e., the relevant depth bin) of the proposed turbines. Length and time scales of the turbulence are also analyzed. Large-scale, anisotropic eddies dominate the energy spectra, which may be the result of proximity to headlands at each site. At small scales, an isotropic turbulent cascade is observed and used to estimate the dissipation rate of turbulent kinetic energy. Data quality and sampling parameters are discussed, with an emphasis on the removal of Doppler noise from turbulence statistics.
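The turbulence intensity reported above is the standard deviation of the velocity fluctuations divided by the mean current. The sketch below normalizes by a simple window-averaged mean as a stand-in for the harmonic tidal fit the authors use, and the synthetic hub-height record (2 m/s mean, 0.2 m/s fluctuations) is invented to reproduce a roughly 10% intensity.

```python
import numpy as np

def turbulence_intensity(u, window):
    """Turbulence intensity: std of fluctuations about a slowly varying
    mean (here a block/window average, standing in for the harmonic
    tidal currents), normalized by that mean."""
    u = np.asarray(u, dtype=float)
    u_mean = u.reshape(-1, window).mean(axis=1)   # per-window mean flow
    u_fluc = u - np.repeat(u_mean, window)        # turbulent residual
    return np.std(u_fluc) / np.mean(np.abs(u_mean))

rng = np.random.default_rng(1)
hub_u = 2.0 + 0.2 * rng.standard_normal(10_000)   # synthetic 2 m/s current
ti = turbulence_intensity(hub_u, window=100)
```

With these invented numbers the intensity comes out near 0.1, i.e., the ~10% figure quoted for the Puget Sound hub-height bins.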
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
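The rigidity density at the heart of this approach is, in flexibility-rigidity-index style, a sum of correlation kernels centred on the atoms, with a resolution parameter setting the scale at which features are resolved. The minimal sketch below uses a Gaussian kernel and two invented 2-D "atoms" to show the resolution effect; the exact kernel form and parameters of the paper are not reproduced here.

```python
import numpy as np

def rigidity_density(grid_pts, atoms, eta):
    """FRI-style rigidity density: sum over atoms of a Gaussian
    correlation kernel; eta is the resolution parameter."""
    d = np.linalg.norm(grid_pts[:, None, :] - atoms[None, :, :], axis=-1)
    return np.exp(-((d / eta) ** 2)).sum(axis=1)

# Two atoms 10 units apart; probe the density at an atom and midway.
atoms = np.array([[0.0, 0.0], [10.0, 0.0]])
probe = np.array([[0.0, 0.0], [5.0, 0.0]])

rho_fine = rigidity_density(probe, atoms, eta=1.0)     # resolves two peaks
rho_coarse = rigidity_density(probe, atoms, eta=10.0)  # blurs into one blob
```

At fine resolution the density dips between the atoms (two separate features in the filtration); at coarse resolution the midpoint density exceeds the peaks (one merged feature), which is precisely how tuning eta focuses the "topological lens" on a chosen scale.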
NASA Astrophysics Data System (ADS)
Schoch, Anna; Blöthe, Jan; Hoffmann, Thomas; Schrott, Lothar
2016-04-01
A large number of sediment budgets have been compiled on different temporal and spatial scales in alpine regions. Detailed sediment budgets based on the quantification of a number of sediment storages (e.g. talus cones, moraine deposits) exist only for a few small-scale drainage basins (up to 10² km²). In contrast, large-scale sediment budgets (> 10³ km²) consider only long-term sediment sinks such as valley fills and lakes. Until now, these studies have often neglected small-scale sediment storages in the headwaters, although the significance of these storages has been reported. A quantitative verification of whether headwaters function as sediment source regions is lacking. Despite substantial transport energy in mountain environments due to steep gradients and high relief, sediment flux in large river systems is frequently disconnected from alpine headwaters. This leads to significant storage of coarse-grained sediment along the flow path from rockwall source regions to large sedimentary sinks in major alpine valleys. To improve the knowledge of sediment budgets in large-scale alpine catchments and to bridge the gap between small- and large-scale sediment budgets, we apply a multi-method approach comprising investigations on different spatial scales in the Upper Rhone Basin (URB). The URB is the largest inneralpine basin in the European Alps with a size of > 5400 km². It is a closed system with Lake Geneva acting as the ultimate sediment sink for suspended and clastic sediment. We examine the spatial pattern and volumes of sediment storages as well as the morphometry on the local and catchment-wide scale. We mapped sediment storages and bedrock in five sub-regions of the study area (Goms, Lötschen valley, Val d'Illiez, Vallée de la Liène, Turtmann valley) in the field and from high-resolution remote sensing imagery to investigate the spatial distribution of different sediment storage types (e.g. talus deposits, debris flow cones, alluvial fans).
These sub-regions cover all three litho-tectonic units of the URB (Helvetic nappes, Penninic nappes, External massifs) and different catchment sizes to capture the inherent variability. Different parameters characterizing topography, surface characteristics, and vegetation cover are analyzed for each storage type. The data is then used in geostatistical models (PCA, stepwise logistic regression) to predict the spatial distribution of sediment storage for the whole URB. We further conduct morphometric analyses of the URB to gain information on the varying degree of glacial imprint and postglacial landscape evolution and their control on the spatial distribution of sediment storage in a large scale drainage basin. Geophysical methods (ground penetrating radar and electrical resistivity tomography) are applied on different sediment storage types on the local scale to estimate mean thicknesses. Additional data from published studies are used to complement our dataset. We integrate the local data in the statistical model on the spatial distribution of sediment storages for the whole URB. Hence, we can extrapolate the stored sediment volumes to the regional scale in order to bridge the gap between small and large scale studies.
Using Analytics to Support Petabyte-Scale Science on the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Ganguly, S.; Nemani, R. R.
2014-12-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. Analytics within NEX occurs at several levels - data, workflows, science and knowledge. At the data level, we are focusing on collecting and analyzing any information that is relevant to efficient acquisition, processing and management of data at the smallest granularity, such as files or collections. This includes processing and analyzing all local and many external metadata that are relevant to data quality, size, provenance, usage and other attributes. This then helps us better understand usage patterns and improve efficiency of data handling within NEX. When large-scale workflows are executed on NEX, we capture information that is relevant to processing and that can be analyzed in order to improve efficiencies in job scheduling, resource optimization, or data partitioning that would improve processing throughput. At this point we also collect data provenance as well as basic statistics of intermediate and final products created during the workflow execution. These statistics and metrics form basic process and data QA that, when combined with analytics algorithms, helps us identify issues early in the production process. We have already seen impact in some petabyte-scale projects, such as global Landsat processing, where we were able to reduce processing times from days to hours and enhance process monitoring and QA. While the focus so far has been mostly on support of NEX operations, we are also building a web-based infrastructure that enables users to perform direct analytics on science data - such as climate predictions or satellite data. 
Finally, as one of the main goals of NEX is knowledge acquisition and sharing, we began gathering and organizing information that associates users and projects with data, publications, locations and other attributes that can then be analyzed as a part of the NEX knowledge graph and used to greatly improve advanced search capabilities. Overall, we see data analytics at all levels as an important part of NEX as we are continuously seeking improvements in data management, workflow processing, use of resources, usability and science acceleration.
NASA Astrophysics Data System (ADS)
Korres, W.; Reichenau, T. G.; Schneider, K.
2013-08-01
Soil moisture is a key variable in hydrology, meteorology and agriculture. Soil moisture, and surface soil moisture in particular, is highly variable in space and time. Its spatial and temporal patterns in agricultural landscapes are affected by multiple natural (precipitation, soil, topography, etc.) and agro-economic (soil management, fertilization, etc.) factors, making it difficult to identify unequivocal cause and effect relationships between soil moisture and its driving variables. The goal of this study is to characterize and analyze the spatial and temporal patterns of surface soil moisture (top 20 cm) in an intensively used agricultural landscape (1100 km2 northern part of the Rur catchment, Western Germany) and to determine the dominant factors and underlying processes controlling these patterns. A second goal is to analyze the scaling behavior of surface soil moisture patterns in order to investigate how spatial scale affects spatial patterns. To achieve these goals, a dynamically coupled, process-based and spatially distributed ecohydrological model was used to analyze the key processes as well as their interactions and feedbacks. The model was validated for two growing seasons for the three main crops in the investigation area: Winter wheat, sugar beet, and maize. This yielded RMSE values for surface soil moisture between 1.8 and 7.8 vol.% and average RMSE values for all three crops of 0.27 kg m-2 for total aboveground biomass and 0.93 for green LAI. Large deviations of measured and modeled soil moisture can be explained by a change of the infiltration properties towards the end of the growing season, especially in maize fields. The validated model was used to generate daily surface soil moisture maps, serving as a basis for an autocorrelation analysis of spatial patterns and scale. Outside of the growing season, surface soil moisture patterns at all spatial scales depend mainly upon soil properties. 
Within the main growing season, larger-scale patterns induced by soil properties are superimposed by the small-scale land use pattern and the resulting small-scale variability of evapotranspiration. However, this influence decreases at larger spatial scales. Most precipitation events temporarily raise surface soil moisture autocorrelation lengths at all spatial scales, even beyond the autocorrelation lengths induced by soil properties. The relation of daily spatial variance to the spatial scale of the analysis fits a power-law scaling function with negative values of the scaling exponent, indicating a decrease in spatial variability with increasing spatial resolution. High evapotranspiration rates cause an increase in the small-scale soil moisture variability, thus leading to large negative values of the scaling exponent. Utilizing a multiple regression analysis, we found that 53% of the variance of the scaling exponent can be explained by a combination of an independent LAI parameter and the antecedent precipitation.
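The power-law scaling step can be sketched as a log-log linear fit of spatial variance against analysis scale, variance ~ scale**beta, with beta the scaling exponent. The data below are synthetic and exactly power-law, so the fit recovers the assumed exponent; the soil moisture study would fit noisy daily variances instead.

```python
import numpy as np

def scaling_exponent(scales, variances):
    """Slope of log(variance) vs. log(scale), i.e. the power-law exponent."""
    beta, _intercept = np.polyfit(np.log(scales), np.log(variances), 1)
    return beta

scales = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # analysis scales (arbitrary units)
variances = 0.05 * scales ** -0.4               # negative exponent, as in the study

beta = scaling_exponent(scales, variances)      # recovers -0.4
```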
Comparison Analysis among Large Amount of SNS Sites
NASA Astrophysics Data System (ADS)
Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro
In recent years, applications of Social Networking Services (SNS) and blogs have been growing as new communication tools on the Internet. Several large-scale SNS sites are prospering; meanwhile, many sites of relatively small scale are offering services. Such small-scale SNSs support isolated, small-group communication of a kind that neither mixi nor MySpace provides. However, most studies on SNS concern particular large-scale SNSs and cannot determine whether their results reflect general features or characteristics specific to those SNSs. For comparative analysis, a comparison across just a few SNSs cannot reach a statistically significant level. We therefore analyze many SNS sites with the aim of classifying them by several approaches. Our paper classifies 50,000 small-scale SNS sites and characterizes them in terms of network structure, patterns of communication, and growth rate. The analysis of network structure shows that many SNS sites have the small-world attribute, with short path lengths and high clustering coefficients. The degree distributions of the SNS sites are close to a power law. This indicates that the small-scale SNS sites have a higher percentage of users with many friends than mixi does. According to the analysis of assortativity coefficients, these SNS sites have negative assortativity, meaning that users with high degree tend to connect to users with low degree. Next, we analyze the patterns of user communication. A friend network on an SNS is explicit, while users' communication behaviors define an implicit network. What kind of relationship do these networks have? To address this question, we obtain some characteristics of users' communication structure and activation patterns on the SNS sites.
By using two new indexes, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate have active diary postings and comments. Moreover, sites with a high friend aggregation rate and a high friend coverage rate become activated when hub users with high degree are not very active, whereas on sites with a low friend aggregation rate and a high friend coverage rate, activation emerges when hub users are active. Finally, we observe SNS sites whose user numbers are increasing considerably, from the viewpoint of network structure, and extract characteristics of high-growth SNS sites. Discrimination based on decision tree analysis recognizes the high-growth SNS sites with a high degree of accuracy. This approach also suggests that mixi and the small-scale SNS sites have different character traits.
Multirelational organization of large-scale social networks in an online world
Szell, Michael; Lambiotte, Renaud; Thurner, Stefan
2010-01-01
The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions; however, the multidimensional nature of these interactions has largely been ignored, mostly because of lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000-odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tail degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations. PMID:20643965
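One of the measures the abstract contrasts across link types is link reciprocity: the fraction of directed links whose reverse link also exists. The toy edge lists below are invented, not the game dataset; they merely mimic the reported tendency of positive ties to be more reciprocated than negative ones.

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) for which (v, u) is also present."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

friendship = [(1, 2), (2, 1), (2, 3), (3, 2), (3, 4)]   # mostly mutual ties
enmity = [(1, 2), (2, 3), (3, 4), (4, 1)]               # one-way aggression

r_pos = reciprocity(friendship)   # 0.8
r_neg = reciprocity(enmity)       # 0.0
```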
ERIC Educational Resources Information Center
Vincent, Jack E.
Part of a large scale research project to test various theories with regard to their ability to analyze international relations, this monograph presents data on the application of distance theory to patterns of cooperation among nations. Distance theory implies that international relations systems (nations, organizations, individuals, etc.) can be…
Living Room vs. Concert Hall: Patterns of Music Consumption in Flanders
ERIC Educational Resources Information Center
Roose, Henk; Stichele, Alexander Vander
2010-01-01
In this article we probe the interplay between public and private music consumption using a large-scale survey of the Flemish population in Belgium. We analyze whether public and private music consumption have different correlates and to what extent there is convergence between the genres that people listen to at home and at concerts. Results show…
ERIC Educational Resources Information Center
Madhavan, Krishna; Johri, Aditya; Xian, Hanjun; Wang, G. Alan; Liu, Xiaomo
2014-01-01
The proliferation of digital information technologies and related infrastructure has given rise to novel ways of capturing, storing and analyzing data. In this paper, we describe the research and development of an information system called Interactive Knowledge Networks for Engineering Education Research (iKNEER). This system utilizes a framework…
Remote Sensing Analysis of Forest Disturbances
NASA Technical Reports Server (NTRS)
Asner, Gregory P. (Inventor)
2015-01-01
The present invention provides systems and methods to automatically analyze Landsat satellite data of forests. The present invention can easily be used to monitor any type of forest disturbance such as from selective logging, agriculture, cattle ranching, natural hazards (fire, wind events, storms), etc. The present invention provides a large-scale, high-resolution, automated remote sensing analysis of such disturbances.
A Chinaman's Chance in Civil Rights Demonstration: A Case Study.
ERIC Educational Resources Information Center
Sim, Yawsoon
A traffic incident in April of 1975 developed into an unprecedented civil rights demonstration by Chinese residents in New York City's Chinatown in May of that year. This paper attempts to trace the factors which led to this large scale demonstration and analyze the development of decision making in this case. The demonstration was the result of…
ERIC Educational Resources Information Center
Immergluck, Daniel
1998-01-01
Discusses the methodology used to analyze the availability of jobs for residents of a particular neighborhood, examining the spatial mismatch hypothesis in the context of jobs available to young minority males in cities. Considers the use of gravity models and the importance of large-scale data sets. (SLD)
ERIC Educational Resources Information Center
Meulman, Jacqueline J.; Verboon, Peter
1993-01-01
Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)
ERIC Educational Resources Information Center
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno
2013-01-01
Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…
ERIC Educational Resources Information Center
Fuchs, Christian; Sandoval, Marisol
2008-01-01
Neoliberalism has resulted in a large-scale economization and capitalization of society that has also permeated the academic system. The paper at hand provides the result of a case study that analyzed how students, who are today frequently confronted by the combination of studying and precarious labour and insecure job perspectives, assess the…
ERIC Educational Resources Information Center
Hsaieh, Hsiao-Chin; Yang, Chia-Ling
2014-01-01
While access to higher education has reached gender parity in Taiwan, the phenomenon of gender segregation and stratification by fields of study and by division of labor persists. In this article, we trace the historical evolution of Taiwan's education system and use data from large-scale educational databases to analyze the association of…
Analyzing the Gender Gap in Math Achievement: Evidence from a Large-Scale US Sample
ERIC Educational Resources Information Center
Cheema, Jehanzeb R.; Galluzzo, Gary
2013-01-01
The US portion of the Program for International Student Assessment (PISA) 2003 student questionnaire, comprising 4,733 observations, was used in a multiple regression framework to predict math achievement from demographic variables, such as gender, race, and socioeconomic status, and two student-specific measures of perception, math anxiety and…
Small High Schools on a Larger Scale: The Impact of School Conversions in Chicago
ERIC Educational Resources Information Center
Kahne, Joseph E.; Sporte, Susan E.; de la Torre, Marisa; Easton, John Q.
2008-01-01
This study examines 4 years of small school reform in Chicago, focusing on schools formed by converting large traditional high schools into small autonomous ones. Analyzing systemwide survey and outcome data, the authors assess the assumptions embedded in the reform's theory of change. They find that these schools are characterized by more…
Transition Points for the Gender Gap in Computer Enjoyment
ERIC Educational Resources Information Center
Christensen, Rhonda; Knezek, Gerald; Overall, Theresa
2005-01-01
Data gathered from 10,000 Texas public school students in Grades 3-12 over the years 2000, 2001, 2002, and 2005 were analyzed to replicate findings first discovered as a byproduct of evaluation of a large scale U.S. Department of Education Technology Innovation Challenge Grant. Initial findings were that girls in Grades 4 and 5 reported enjoying…
Hunt, Geoffrey; Moloney, Molly; Fazio, Adam
2012-01-01
Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach have nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work, including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams.
Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
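The traditional Green-Kubo route that the abstract benchmarks against expresses a transport coefficient as the time integral of an equilibrium current autocorrelation function. A minimal numeric sketch, using an analytic stand-in ACF C(t) = C0 * exp(-t / tau) (whose exact integral is C0 * tau) rather than simulated molecular currents:

```python
import numpy as np

# analytic stand-in for an equilibrium current autocorrelation function
C0, tau = 2.0, 0.5
t = np.linspace(0.0, 10.0, 20001)
acf = C0 * np.exp(-t / tau)

# trapezoidal quadrature of the Green-Kubo integral: D = ∫ C(t) dt
D = float(np.sum(0.5 * (acf[1:] + acf[:-1]) * np.diff(t)))   # ≈ C0 * tau = 1.0
```

In practice the ACF is estimated from noisy trajectory data, and its slowly converging tail is the source of the statistical error the paper's importance-sampling approach aims to beat.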
NASA Astrophysics Data System (ADS)
Boyd, O. S.; Dreger, D. S.; Gritto, R.
2015-12-01
Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitude, focal mechanism, and kinematic finite-source models with the goal of developing a rupture area scaling relationship for The Geysers and specifically for the Prati-32 EGS injection experiment. Thus far we have analyzed moment tensors of M ≥ 2 events, and are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles that of a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the microearthquakes, and for select events, finite-source models. Stress drops inferred from corner frequencies and from finite-source modeling will be compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katkov, Ivan Yu.; Sil'chenko, Olga K.; Afanasiev, Victor L., E-mail: katkov.ivan@gmail.com, E-mail: olga@sai.msu.su, E-mail: vafan@sao.ru
We have obtained and analyzed long-slit spectral data for the lenticular galaxy IC 719. In this gas-rich S0 galaxy, the large-scale gaseous disk counterrotates with the global stellar disk. Moreover, in the IC 719 disk we have detected a secondary stellar component corotating with the ionized gas. By using emission-line intensity ratios, we have proven that the gas is excited by young stars and thus claim current star formation, most intense in a ring-like zone at a radius of 10'' (1.4 kpc). The oxygen abundance of the gas in the star-forming ring is about half the solar abundance. Since the stellar disk remains dynamically cool, we conclude that smooth prolonged accretion of external gas from a neighboring galaxy provides the current building of the thin large-scale stellar disk.
A Framework for Spatial Interaction Analysis Based on Large-Scale Mobile Phone Data
Li, Weifeng; Cheng, Xiaoyun; Guo, Gaohua
2014-01-01
The overall understanding of spatial interaction and exact knowledge of its dynamic evolution are required in urban planning and transportation planning. This study aimed to analyze spatial interaction based on large-scale mobile phone data. The newly arisen mass dataset required a new methodology compatible with its particular characteristics. A three-stage framework was proposed in this paper, including data preprocessing, critical activity identification, and spatial interaction measurement. The proposed framework introduced frequent pattern mining and measured the spatial interaction by the obtained associations. A case study of three communities in Shanghai was carried out as a verification of the proposed method and a demonstration of its practical application. The spatial interaction patterns and the representative features proved the rationality of the proposed framework. PMID:25435865
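The association step can be sketched as pairwise frequent-pattern counting: how often do two zones co-occur in a single user's set of activity locations? The zone labels and records below are invented, not Shanghai data, and real frequent-pattern mining (e.g. Apriori) would prune by a minimum support rather than count all pairs.

```python
from collections import Counter
from itertools import combinations

# each set = the zones where one user's critical activities were identified
user_zones = [
    {"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "C"}, {"A", "B"},
]

# support of each zone pair = number of users whose activity set contains both
pair_support = Counter()
for zones in user_zones:
    for pair in combinations(sorted(zones), 2):
        pair_support[pair] += 1

top_pair, top_support = pair_support.most_common(1)[0]   # ("A", "B") with support 3
```

High-support pairs would then be read as strong spatial interaction between the corresponding communities.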
NASA Astrophysics Data System (ADS)
Fathy, Ibrahim
2016-07-01
This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (geographic coordinates 29.18 N, 30.50 E) and Aswan (geographic coordinates 23.59 N, 32.51 E) in Egypt. A second-order Butterworth band-pass filter has been used to filter and analyze the horizontal H-component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected simultaneously by the two stations; in addition, the average amplitude of the pulsation signals was calculated.
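The band-pass step can be illustrated on a synthetic 1-s H-component record. The paper used a second-order Butterworth filter (in SciPy that would be `scipy.signal.butter` with `filtfilt`); this dependency-free sketch masks the FFT instead, which is not the paper's filter, just the same band-selection idea. The Pc5 band (roughly 150-600 s periods) and all signal parameters are illustrative.

```python
import numpy as np

t = np.arange(3600.0)                          # one hour at 1 Hz sampling
pc5 = np.sin(2 * np.pi * t / 300.0)            # 300 s period, inside the Pc5 band
outside = np.sin(2 * np.pi * t / 1800.0) + 0.5 * np.sin(2 * np.pi * t / 10.0)
x = pc5 + outside                              # target signal plus out-of-band parts

freqs = np.fft.rfftfreq(t.size, d=1.0)         # Hz
X = np.fft.rfft(x)
band = (freqs >= 1.0 / 600.0) & (freqs <= 1.0 / 150.0)   # Pc5: ~1.7-6.7 mHz
X[~band] = 0.0
filtered = np.fft.irfft(X, n=t.size)           # recovers the Pc5 component
```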
Searches for cosmic-ray electron anisotropies with the Fermi Large Area Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; Ajello, M.; Bechtol, K.
The Large Area Telescope on board the Fermi satellite (Fermi LAT) detected more than 1.6×10^6 cosmic-ray electrons/positrons with energies above 60 GeV during its first year of operation. The arrival directions of these events were searched for anisotropies of angular scale extending from ~10° up to 90°, and of minimum energy extending from 60 GeV up to 480 GeV. Two independent techniques were used to search for anisotropies, both resulting in null results. Upper limits on the degree of the anisotropy were set that depended on the analyzed energy range and on the anisotropy's angular scale. The upper limits for a dipole anisotropy ranged from ~0.5% to ~10%.
Out-migration and depopulation of the Russian North during the 1990s.
Heleniak, T
1999-01-01
The large-scale out-migration from Russia's northern regions that has taken place over the course of the 1990s is analyzed. "The study is based on unpublished oblast-level migration data compiled by the Russian Government, field work by the author, as well as two extensive 1998 surveys of recent and potential migrants, respectively. Age, gender, and educational level of migrants are analyzed to determine the extent of change in Northern population structure attributable to migration. A concluding section presents Russian Government projections of the North's population to 2010." (excerpt)
NASA Technical Reports Server (NTRS)
Patel, V. L.
1975-01-01
Twenty-one geomagnetic storm events during 1966 and 1970 were studied by using simultaneous interplanetary magnetic field and plasma parameters. Explorer 33 and 35 field and plasma data were analyzed on the large scale (hourly) and the small scale (3 min) during the time interval coincident with the initial phase of the geomagnetic storms. The solar-ecliptic Bz component turns southward at the end of the initial phase, thus triggering the main-phase decrease in the Dst geomagnetic field. When Bz is already negative, its value becomes further negative. The By component also shows large fluctuations along with Bz. When there are no clear changes in the Bz component, By shows abrupt changes at the main-phase onset. In the small-scale behavior of the magnetic field and electric field (E = -V x B), studied in detail for three events, it is found that fluctuations in By, Bz, Ey and Ez are present in the initial phase. These fluctuations become larger just before the main phase of the storm begins. In the large-scale behavior the field remains quiet, because the small-scale variations are averaged out.
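The convective electric field cited above, E = -V × B, is a direct cross product. A minimal numerical sketch (the solar-wind values and GSE-like axis convention below are illustrative, not from the paper):

```python
import numpy as np

def convective_e_field(v_km_s, b_nt):
    """E = -V x B, with V in km/s and B in nT.

    Unit bookkeeping: 1 km/s * 1 nT = 1e3 m/s * 1e-9 T = 1e-6 V/m = 1e-3 mV/m,
    hence the 1e-3 factor to return E in mV/m.
    """
    return -np.cross(v_km_s, b_nt) * 1e-3

v = np.array([-400.0, 0.0, 0.0])   # typical anti-sunward solar wind, 400 km/s
b = np.array([0.0, 0.0, -5.0])     # southward IMF, Bz = -5 nT
e = convective_e_field(v, b)       # duskward Ey of +2 mV/m
```

A southward Bz thus yields a positive (duskward) Ey, consistent with the geoeffective configuration the abstract associates with main-phase onset.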
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machicoane, Nathanaël; Volk, Romain
We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowing how size and density affect these time scales is crucial for understanding particle dynamics and may permit modeling with two-time stochastic processes (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain unbiased Lagrangian integral times. This is particularly useful for accessing the scale separation for such particles and for comparing it to the case of fluid particles in a similar configuration.
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
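The sublinear (Heaps-law) growth of vocabulary described above can be reproduced with a toy sampling experiment: draw tokens from a Zipf rank-frequency law and count distinct types. The vocabulary size, Zipf exponent, and text lengths below are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def vocabulary_size(n_words, n_types=50_000, zipf_exponent=1.1):
    """Draw n_words tokens from a Zipf rank-frequency law; count distinct types."""
    freqs = 1.0 / np.arange(1, n_types + 1) ** zipf_exponent
    freqs /= freqs.sum()
    tokens = rng.choice(n_types, size=n_words, p=freqs)
    return len(np.unique(tokens))

sizes = [10_000, 40_000, 160_000]
vocab = [vocabulary_size(n) for n in sizes]
# Heaps-like behavior: quadrupling the text length less than quadruples V(N).
```

Checking the scaling of the *fluctuations* around V(N), as the paper does, additionally requires topic-dependent frequencies; plain sampling as above yields the decaying fluctuations the authors contrast with Taylor's law.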
Images of Bottomside Irregularities Observed at Topside Altitudes
NASA Technical Reports Server (NTRS)
Burke, William J.; Gentile, Louise C.; Shomo, Shannon R.; Roddy, Patrick A.; Pfaff, Robert F.
2012-01-01
We analyzed plasma and field measurements acquired by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite during an eight-hour period on 13-14 January 2010 when strong to moderate 250 MHz scintillation activity was observed at nearby Scintillation Network Decision Aid (SCINDA) ground stations. C/NOFS consistently detected relatively small-scale density and electric field irregularities embedded within large-scale (approx 100 km) structures at topside altitudes. Significant spectral power measured at the Fresnel (approx 1 km) scale size suggests that C/NOFS was magnetically conjugate to bottomside irregularities similar to those directly responsible for the observed scintillations. Simultaneous ion drift and plasma density measurements indicate three distinct types of large-scale irregularities: (1) upward moving depletions, (2) downward moving depletions, and (3) upward moving density enhancements. The first type has the characteristics of equatorial plasma bubbles; the second and third do not. The data suggest that both downward moving depletions and upward moving density enhancements and the embedded small-scale irregularities may be regarded as Alfvenic images of bottomside irregularities. This interpretation is consistent with predictions of previously reported theoretical modeling and with satellite observations of upward-directed Poynting flux in the low-latitude ionosphere.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects of losses of molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room.
Adding furniture and occupants can increase this spatial variation by another factor of 3.
Voting contagion: Modeling and analysis of a century of U.S. presidential elections
de Aguiar, Marcus A. M.
2017-01-01
Social influence plays an important role in human behavior and decisions. Sources of influence can be divided into external influences, which are independent of social context, and influences originating from peers, such as family and friends. An important question is how to disentangle social contagion by peers from external influences. While a variety of experimental and observational studies have provided insight into this problem, identifying the extent of contagion based on large-scale observational data with an unknown network structure remains largely unexplored. By bridging the gap between the large-scale complex systems perspective of collective human dynamics and the detailed approach of social sciences, we present a parsimonious model of social influence, and apply it to a central topic in political science—elections and voting behavior. We provide an analytical expression for the county vote-share distribution, which is in excellent agreement with almost a century of observed U.S. presidential election data. Analyzing the social influence topography over this period reveals an abrupt phase transition from low to high levels of social contagion, and robust differences among regions. These results suggest that social contagion effects are becoming more instrumental in shaping large-scale collective political behavior, with implications for democratic electoral processes and policies. PMID:28542409
epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.
Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa
2016-12-01
Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute-intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
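TINGe itself uses a B-spline MI estimator with permutation testing; as a much simpler stand-in, a plain histogram estimate of mutual information already ranks a co-regulated gene pair above an unrelated one. The synthetic expression profiles and bin count below are illustrative:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in nats) between two profiles."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x, column vector
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y, row vector
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 500
g1 = rng.normal(size=n)                 # "regulator" expression profile
g2 = g1 + 0.3 * rng.normal(size=n)      # strongly coupled target gene
g3 = rng.normal(size=n)                 # independent gene
```

Network reconstruction then amounts to computing such pairwise scores for all gene pairs and keeping edges whose MI survives a permutation-based significance threshold.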
Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...
2016-11-09
Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse-fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
NASA Astrophysics Data System (ADS)
Chen, Xinchi; Zhang, Liping; Zou, Lei; Shan, Lijie; She, Dunxian
2018-02-01
The middle and lower reaches of the Yangtze River Basin (MLYR) are greatly affected by frequent drought/flooding events and abrupt alternations between these events in China. The purpose of this study is to analyze the spatial and temporal variability of dryness/wetness based on data obtained from 75 meteorological stations in the MLYR for the period 1960-2015 and to investigate the correlations between dryness/wetness and atmospheric circulation factors. The empirical orthogonal function method was applied based on the monthly Standardized Precipitation Index at a 12-month time scale. The first leading pattern captured a common dryness/wetness signal over the entire MLYR area and accounted for 40.87% of the total variance. Both the second and third leading patterns manifested as regional features of variability over the MLYR. The cross-wavelet transform method was applied to explore the potential relationships between the three leading patterns and large-scale climate factors, and the relationships between dryness/wetness events and climate factors were then analyzed. Our results indicate that the main patterns of dryness/wetness were primarily associated with the Niño 3.4, Indian Ocean Dipole, Southern Oscillation and Northern Oscillation indices, with the first pattern exhibiting noticeable periods and remarkable changes in phase with the indices.
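The EOF step can be sketched as a singular value decomposition of the (time × station) anomaly matrix. The synthetic field below, with one basin-wide shared signal plus station noise, is an invented stand-in for the 75-station SPI-12 series:

```python
import numpy as np

def eof_analysis(field):
    """EOF decomposition of a (time, station) field via SVD of its anomalies.

    Returns spatial patterns (modes x stations), principal components
    (time x modes), and the fraction of variance explained by each mode.
    """
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s**2 / (s**2).sum()
    return vt, u * s, variance_fraction

rng = np.random.default_rng(2)
times, stations = 200, 30
shared = rng.normal(size=(times, 1)) * np.ones((1, stations))  # basin-wide signal
field = shared + 0.3 * rng.normal(size=(times, stations))      # plus local noise
patterns, pcs, var = eof_analysis(field)
```

A leading mode that explains a large variance fraction with spatially uniform loadings corresponds to the "same characteristics over the entire area" pattern reported for the MLYR.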
Research on TCP/IP network communication based on Node.js
NASA Astrophysics Data System (ADS)
Huang, Jing; Cai, Lixiong
2018-04-01
Faced with big data, long-lived connections and high concurrency, TCP/IP network communication can hit performance bottlenecks under a blocking, multi-threaded service model. This paper presents a TCP/IP network communication method based on Node.js. After analyzing the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the source of its efficiency is discussed; the TCP/IP network communication model is then compared and analyzed to explain why the TCP/IP protocol stack is widely used in network communication. Finally, to handle the large data volumes and high concurrency of large-scale grape-growing environment monitoring, a TCP server based on Node.js is designed. The results show that the example runs stably and efficiently.
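No Node.js code is given in the abstract; as a language-neutral illustration of the same event-driven, non-blocking model, here is a minimal asyncio echo server in Python. The payload and handler logic are invented for the example; the point is that one single-threaded event loop multiplexes many connections without blocking:

```python
import asyncio

async def handle(reader, writer):
    # Each connection is a coroutine; the event loop interleaves them,
    # so a slow client never blocks the others (the Node.js model).
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def demo():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)  # port 0: any free port
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"soil-moisture:0.31")        # invented sensor payload
    await writer.drain()
    echoed = await reader.read(1024)
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return echoed

echoed = asyncio.run(demo())
```

In a monitoring deployment the handler would parse and store sensor readings instead of echoing, but the concurrency structure stays the same.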
Should we trust build-up/wash-off water quality models at the scale of urban catchments?
Bonhomme, Céline; Petrucci, Guido
2017-01-01
Models of runoff water quality at the scale of an urban catchment usually rely on build-up/wash-off formulations obtained through small-scale experiments. Often, the physical interpretation of the model parameters, valid at the small scale, is transposed to large-scale applications. Testing different levels of spatial variability, the parameter distributions of a water quality model are obtained in this paper through a Markov chain Monte Carlo algorithm and analyzed. The simulated variable is the total suspended solids concentration at the outlet of a periurban catchment in the Paris region (2.3 km²), for which high-frequency turbidity measurements are available. This application suggests that build-up/wash-off models applied at the catchment scale do not maintain their physical meaning, but should be considered as "black-box" models.
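For reference, the build-up/wash-off formulation the paper questions at catchment scale typically combines exponential accumulation during dry weather with a power-law wash-off during rain. The parameter values below are invented for illustration, not the paper's calibrated distributions:

```python
import numpy as np

def buildup_washoff(rain, dt=1.0, b_max=100.0, k_build=0.08, k_wash=0.05, n=1.5):
    """Exponential build-up toward b_max plus rating-style wash-off.

    rain: rainfall intensity series (mm/h, hourly steps);
    returns the washed-off pollutant load per step (kg).
    """
    b = 0.0                                   # mass stored on the surface (kg)
    loads = []
    for q in rain:
        b += k_build * (b_max - b) * dt       # dry-weather accumulation
        w = min(k_wash * q**n * b * dt, b)    # wash-off, capped by stored mass
        b -= w
        loads.append(w)
    return np.array(loads)

dry = np.zeros(48)                                   # two dry days
storm = np.concatenate([dry, np.full(6, 10.0)])      # then a 6-h, 10 mm/h storm
loads = buildup_washoff(storm)
```

The sketch reproduces the classic "first flush": the opening storm step exports far more mass than later steps, because accumulated build-up is exhausted quickly. It is precisely the small-scale physical meaning of k_build, k_wash and n that the paper finds does not survive transposition to a 2.3 km² catchment.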
Accounting for Rainfall Spatial Variability in Prediction of Flash Floods
NASA Astrophysics Data System (ADS)
Saharia, M.; Kirstetter, P. E.; Gourley, J. J.; Hong, Y.; Vergara, H. J.
2016-12-01
Flash floods are a particularly damaging natural hazard worldwide in terms of both fatalities and property damage. In the United States, the lack of a comprehensive database that catalogues information related to flash flood timing, location, causative rainfall, and basin geomorphology has hindered broad characterization studies. First, a representative and long archive of more than 20,000 flooding events during 2002-2011 is used to analyze the spatial and temporal variability of flash floods. We also derive a large number of spatially distributed geomorphological and climatological parameters, such as basin area, mean annual precipitation, and basin slope, to identify static basin characteristics that influence flood response. For the same period, the National Severe Storms Laboratory (NSSL) has produced a decadal archive of Multi-Radar/Multi-Sensor (MRMS) radar-only precipitation rates at 1-km spatial resolution with 5-min temporal resolution. This provides an unprecedented opportunity to analyze the impact of event-level precipitation variability on flooding using a big-data approach. To analyze the impact of sub-basin-scale rainfall spatial variability on flooding, indices such as the first and second scaled moments of rainfall, horizontal gap, and vertical gap are computed from the MRMS dataset. Flooding characteristics such as rise time, lag time, and peak discharge are then linked to the derived geomorphologic, climatologic, and rainfall indices to identify basin characteristics that drive flash floods. Next, the resulting model is used to predict flash-flooding characteristics over the continental U.S., specifically over regions poorly covered by hydrological observations. So far, studies involving rainfall variability indices have only been performed on a case-study basis, and a large-scale approach is expected to provide deeper insight into how sub-basin-scale precipitation variability affects flooding.
Finally, these findings are validated using the National Weather Service storm reports and a historical flood fatalities database. This analysis framework will serve as a baseline for evaluating distributed hydrologic model simulations such as the Flooded Locations And Simulated Hydrographs Project (FLASH) (http://flash.ou.edu).
Accounting for rainfall spatial variability in the prediction of flash floods
NASA Astrophysics Data System (ADS)
Saharia, Manabendra; Kirstetter, Pierre-Emmanuel; Gourley, Jonathan J.; Hong, Yang; Vergara, Humberto; Flamig, Zachary L.
2017-04-01
Flash floods are a particularly damaging natural hazard worldwide in terms of both fatalities and property damage. In the United States, the lack of a comprehensive database that catalogues information related to flash flood timing, location, causative rainfall, and basin geomorphology has hindered broad characterization studies. First, a representative and long archive of more than 15,000 flooding events during 2002-2011 is used to analyze the spatial and temporal variability of flash floods. We also derive a large number of spatially distributed geomorphological and climatological parameters, such as basin area, mean annual precipitation, and basin slope, to identify static basin characteristics that influence flood response. For the same period, the National Severe Storms Laboratory (NSSL) has produced a decadal archive of Multi-Radar/Multi-Sensor (MRMS) radar-only precipitation rates at 1-km spatial resolution with 5-min temporal resolution. This provides an unprecedented opportunity to analyze the impact of event-level precipitation variability on flooding using a big-data approach. To analyze the impact of sub-basin-scale rainfall spatial variability on flooding, indices such as the first and second scaled moments of rainfall, horizontal gap, and vertical gap are computed from the MRMS dataset. Finally, flooding characteristics such as rise time, lag time, and peak discharge are linked to the derived geomorphologic, climatologic, and rainfall indices to identify basin characteristics that drive flash floods. The database has been subjected to rigorous quality control by accounting for radar beam height and the percentage of snow in basins. So far, studies involving rainfall variability indices have only been performed on a case-study basis, and a large-scale approach is expected to provide deeper insight into how sub-basin-scale precipitation variability affects flooding.
Finally, these findings are validated using the National Weather Service storm reports and a historical flood fatalities database. This analysis framework will serve as a baseline for evaluating distributed hydrologic model simulations such as the Flooded Locations And Simulated Hydrographs Project (FLASH) (http://flash.ou.edu).
NASA Astrophysics Data System (ADS)
Hugue, F.; Lapointe, M.; Eaton, B. C.; Lepoutre, A.
2016-01-01
We illustrate an approach to quantify patterns in hydraulic habitat composition and local heterogeneity that is applicable at low cost over very large river extents, with selectable reach-window scales. Ongoing developments in remote sensing and geographic information science are massively improving efficiency in analyzing earth-surface features. With the development of new satellite sensors and drone platforms, and with the lowered cost of high-resolution multispectral imagery, fluvial geomorphology is experiencing a revolution in mapping streams at high resolution. Exploiting the power of aerial or satellite imagery is particularly useful in a riverscape research framework (Fausch et al., 2002), where high-resolution sampling of fluvial features and very large coverage extents are needed. This study presents a satellite remote sensing method that requires very limited field calibration data to estimate, over scales ranging from 1 m to many tens of river kilometers, (i) spatial composition metrics for key hydraulic mesohabitat types and (ii) reach-scale wetted-habitat heterogeneity indices such as the hydromorphological index of diversity (HMID). When the purpose is hydraulic habitat characterization applied over long river networks, the proposed method (although less accurate) is much less computationally expensive and less data demanding than two-dimensional computational fluid dynamics (CFD). Here, we illustrate the tools based on a WorldView-2 satellite image of the Kiamika River, near Mont-Laurier, Quebec, Canada, specifically over a 17-km river reach below the Kiamika dam. In the first step, a high-resolution water depth (D) map is produced from a spectral band ratio (calculated from the multispectral image), calibrated with limited field measurements.
Next, based only on the known river discharge and estimated cross-section depths at the time of image capture, empirically based pseudo-2D hydraulic rules are used to rapidly generate a two-dimensional map of flow velocity (V) over the 17-km Kiamika reach. The joint distribution of D and V over wetted zones is then used to reveal structural patterns in hydraulic habitat availability at patch, reach, and segment scales. Here we analyze 156 bivariate (D, V) density function plots estimated over moving reach windows along the satellite scene extent to extract 14 physical habitat metrics (such as river width, mean and modal depths and velocities, variances and covariance of D and V over 1-m pixels, HMID, and entropy). A principal component analysis on the set of metrics is then used to cluster river reaches according to the similarity of their hydraulic habitat composition and heterogeneity. Applications of this approach can include (i) detection of specific fish habitat at riverscape scales (e.g., large areas of riffle spawning beds, deeper pools) for regional management, (ii) studying how river habitat heterogeneity is correlated with fish distributions, and (iii) guidance in siting restoration of key habitats or post-regulation monitoring of representative reaches of various types.
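The HMID mentioned above is, in the formulation usually attributed to Gostner and colleagues, the product over depth and velocity of (1 + CV)^2, where CV is the coefficient of variation of the pixel values; higher values indicate more hydraulically diverse reaches. A toy comparison of a uniform versus a varied reach (all numbers synthetic):

```python
import numpy as np

def hmid(depth, velocity):
    """Hydromorphological index of diversity: product over the depth and
    velocity samples of (1 + coefficient of variation) squared."""
    out = 1.0
    for values in (depth, velocity):
        cv = np.std(values) / np.mean(values)
        out *= (1.0 + cv) ** 2
    return out

rng = np.random.default_rng(3)
# A hydraulically uniform reach: nearly constant depth (m) and velocity (m/s).
uniform_reach = hmid(rng.normal(0.5, 0.01, 1000).clip(0.01),
                     rng.normal(0.8, 0.01, 1000).clip(0.01))
# A varied reach: wide spread in both depth and velocity.
varied_reach = hmid(rng.normal(0.5, 0.25, 1000).clip(0.01),
                    rng.normal(0.8, 0.4, 1000).clip(0.01))
```

Computed over moving reach windows of the (D, V) maps, this single scalar lets reaches be ranked and clustered by habitat heterogeneity, as in the principal component analysis described above.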
Role of Hydrodynamic and Mineralogical Heterogeneities on Reactive Transport Processes.
NASA Astrophysics Data System (ADS)
Luquot, L.; Garcia-Rios, M.; soler Sagarra, J.; Gouze, P.; Martinez-Perez, L.; Carrera, J.
2017-12-01
Predicting reactive transport at large scales, i.e., the Darcy and field scales, is still challenging considering the number of heterogeneities that may be present from the nm scale to the pore scale. It is well documented that conventional continuum-scale approaches oversimplify and/or ignore many important aspects of rock structure, chemical reactions, fluid displacement and transport, which results in uncertainties when applied to field-scale operations. The changes in flow and reactive transport across different spatial and temporal scales are of central concern in many geological applications such as groundwater systems, geo-energy, built rock heritage and geological storage. In this presentation, we discuss laboratory and numerical results on how local heterogeneities (structural, hydrodynamic and mineralogical) can affect the localization and rate of reaction processes. Different flow-through laboratory experiments using various rock samples are presented, from simple monomineral rocks such as limestone to more complex rocks composed of different minerals with a large range of kinetic reactions. A new numerical approach based on multirate water mixing is presented and applied to one of the laboratory experiments in order to analyze and distinguish the effects of mineralogy distribution and hydrodynamic heterogeneity on the total reaction rate.
At what scale should microarray data be analyzed?
Huang, Shuguang; Yeo, Adeline A; Gelbert, Lawrence; Lin, Xi; Nisenbaum, Laura; Bemis, Kerry G
2004-01-01
The hybridization intensities derived from microarray experiments, for example Affymetrix's MAS5 signals, are very often transformed in one way or another before statistical models are fitted. The motivation for performing a transformation is usually to satisfy model assumptions such as normality and homogeneity of variance. Generally speaking, two types of strategies are applied to microarray data, depending on the analysis need: correlation analysis, where all the gene intensities on the array are considered simultaneously, and gene-by-gene ANOVA, where each gene is analyzed individually. We investigate the distributional properties of Affymetrix GeneChip signal data under the two scenarios, focusing on the impact of analyzing the data at an inappropriate scale. The Box-Cox family of transformations is first investigated for the strategy of pooling genes; the commonly used log-transformation is applied for comparison. For the scenario where analysis is on a gene-by-gene basis, model assumptions such as normality are explored. The impact of using a wrong scale is illustrated with the log-transformation and the quartic-root transformation. When all the genes on the array are considered together, the dependence of the variation level on the expression level can be satisfactorily removed by a Box-Cox transformation. When genes are analyzed individually, the distributional properties of the intensities are shown to be gene dependent. Derivation and simulation show that some loss of power is incurred when a wrong scale is used, but due to the robustness of the t-test, the loss is acceptable when the fold-change is not very large.
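As a sketch of the pooled-genes strategy, scipy's maximum-likelihood Box-Cox fit applied to skewed positive intensities recovers a near-log transform. The log-normal "signals" below are synthetic stand-ins for MAS5 values, with invented parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Skewed, strictly positive intensities; log-normal mimics MAS5-like signals.
signals = rng.lognormal(mean=6.0, sigma=1.0, size=2000)

transformed, lam = stats.boxcox(signals)   # lambda chosen by maximum likelihood
# For log-normal data the fitted lambda is near 0, i.e. essentially log(x),
# and the transformed values are far less skewed than the raw signals.
```

An estimated lambda far from 0 (or from 0.25, the quartic root) would be direct evidence that the habitual log-transform is the wrong scale for a given dataset, which is the question the paper examines.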
Nonlinear Control of Large Disturbances in Magnetic Bearing Systems
NASA Technical Reports Server (NTRS)
Jiang, Yuhong; Zmood, R. B.
1996-01-01
In this paper, the nonlinear operation of magnetic bearing control methods is reviewed. For large disturbances, the effects of displacement constraints and of power amplifier current and di/dt limits on bearing control system performance are analyzed. The operation of magnetic bearings exhibiting self-excited large-scale oscillations has been studied both experimentally and by simulation. The simulation of the bearing system has been extended to include the effects of eddy currents in the actuators, so as to improve the accuracy of the simulation results. The results of these experiments and simulations are compared, and some useful conclusions are drawn for improving bearing system robustness.
Novel doorways and resonances in large-scale classical systems
NASA Astrophysics Data System (ADS)
Franco-Villafañe, J. A.; Flores, J.; Mateos, J. L.; Méndez-Sánchez, R. A.; Novaro, O.; Seligman, T. H.
2011-05-01
We show how the concept of doorway states carries beyond its typical applications and usual concepts. The scale on which it may occur is increased to large classical wave systems. Specifically, we analyze the seismic response of sedimentary basins covered by water-logged clays, a rather common situation for urban sites. A model is introduced in which the doorway state is a plane wave propagating in the interface between the sediments and the clay. This wave is produced by the coupling of a Rayleigh wave and an evanescent SP-wave, which in turn leads to a strong resonant response in the soft clays near the surface of the basin. Our model calculations are compared with measurements during Mexico City earthquakes, showing quite good agreement. This not only provides a transparent explanation of catastrophic resonant seismic response in certain basins but also constitutes, to date, the largest-scale example of the doorway state mechanism in wave scattering. Furthermore, the doorway state itself has interesting and rather unusual characteristics.
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1982-01-01
An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average over 18 consecutive pixels of the 1 km resolution Landsat scanner data was computed; the averaged data were scaled to 8 bits and investigated at different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg of latitude were analyzed, and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, in an unsupervised mode, and against ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, which resulted in less than 10% difference. Low-resolution large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.
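The 18-pixel running average described above is straightforward to sketch. The boxcar filter below assumes a one-dimensional stream of pixel values (the window length of 18 is taken from the abstract; everything else is illustrative):

```python
def running_average(pixels, window=18):
    """Boxcar running average over `window` consecutive samples.

    Returns one value per full window position, i.e.
    len(pixels) - window + 1 outputs, or [] if the input is too short.
    """
    if len(pixels) < window:
        return []
    out = []
    s = sum(pixels[:window])
    out.append(s / window)
    for i in range(window, len(pixels)):
        s += pixels[i] - pixels[i - window]   # slide the window in O(1)
        out.append(s / window)
    return out
```

The sliding-sum update keeps the cost linear in the number of pixels regardless of window length, which matters when averaging whole scan lines.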
An Application of Hydraulic Tomography to a Large-Scale Fractured Granite Site, Mizunami, Japan.
Zha, Yuanyuan; Yeh, Tian-Chyi J; Illman, Walter A; Tanaka, Tatsuya; Bruines, Patrick; Onoe, Hironori; Saegusa, Hiromitsu; Mao, Deqiang; Takeuchi, Shinji; Wen, Jet-Chau
2016-11-01
While hydraulic tomography (HT) is a mature aquifer characterization technology, its applications to characterizing the hydrogeology of kilometer-scale fault and fracture zones are rare. This paper sequentially analyzes datasets from two new pumping tests as well as those from two previous pumping tests analyzed by Illman et al. (2009) at a fractured granite site in Mizunami, Japan. Results of this analysis show that the datasets from the two previous pumping tests, both on one side of a fault zone, as used in the previous study led to inaccurate mapping of the fracture and fault zones. Inclusion of the datasets from the two new pumping tests (one of which was conducted on the other side of the fault) yields locations of the fault zone consistent with those based on geological mapping. The new datasets also produce a detailed image of the irregular fault zone, which is not available from geological investigation alone or from the previous study. As a result, we conclude that if prior knowledge about geological structures at a field site is considered during the design of HT surveys, valuable non-redundant datasets about the fracture and fault zones can be collected. Only with these non-redundant datasets can HT be a viable and robust tool for delineating fracture and fault distributions over kilometer scales, even when only a limited number of boreholes are available. In essence, this paper demonstrates that HT is a new tool for geologists, geophysicists, and engineers for mapping large-scale fracture and fault zone distributions. © 2016, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming to assess the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear, in terms of complexity, to be much closer to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial for modeling realistic earthquake source processes and for estimating seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content.
In particular, we investigate the existence of a minimum roughness length scale, expressed in terms of rupture-inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature a classical linear slip-weakening friction law on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
NASA Astrophysics Data System (ADS)
Kennedy, Scott Warren
A steady decline in the cost of wind turbines and increased experience in their successful operation have brought this technology to the forefront of viable alternatives for large-scale power generation. Methodologies for understanding the costs and benefits of large-scale wind power development, however, are currently limited. In this thesis, a new and widely applicable technique for estimating the social benefit of large-scale wind power production is presented. The social benefit is based upon wind power's energy and capacity services and the avoidance of environmental damages. The approach uses probabilistic modeling techniques to account for the stochastic interaction between wind power availability, electricity demand, and conventional generator dispatch. A method for including the spatial smoothing effect of geographically dispersed wind farms is also introduced. The model has been used to analyze potential offshore wind power development to the south of Long Island, NY. If natural gas combined cycle (NGCC) and integrated gasifier combined cycle (IGCC) are the alternative generation sources, wind power exhibits a negative social benefit due to its high capacity cost and the relatively low emissions of these advanced fossil-fuel technologies. Environmental benefits increase significantly if charges for CO2 emissions are included. Results also reveal a diminishing social benefit as wind power penetration increases. The dependence of wind power benefits on natural gas and coal prices is also discussed. In power systems with a high penetration of wind generated electricity, the intermittent availability of wind power may influence hourly spot prices. A price responsive electricity demand model is introduced that shows a small increase in wind power value when consumers react to hourly spot prices. The effectiveness of this mechanism depends heavily on estimates of the own- and cross-price elasticities of aggregate electricity demand. 
This work makes a valuable contribution by synthesizing information from research in power market economics, power system reliability, and environmental impact assessment, to develop a comprehensive methodology for analyzing wind power in the context of long-term energy planning.
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan
2013-06-27
Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using the Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log run-time metrics during data processing and to monitor multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box.
Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
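Improvement (2), splitting large sequence files for downstream load balance, can be illustrated in a few lines. The sketch below assumes plain 4-line FASTQ records already read into memory; Rainbow's actual splitter is not described in the abstract, and `split_fastq` is a hypothetical name:

```python
def split_fastq(lines, records_per_chunk):
    """Split FASTQ text (one 4-line record per read) into chunks of at
    most `records_per_chunk` records, never cutting a record in half."""
    rec_lines = 4
    if len(lines) % rec_lines != 0:
        raise ValueError("truncated FASTQ record")
    chunk_lines = records_per_chunk * rec_lines
    return [lines[i:i + chunk_lines] for i in range(0, len(lines), chunk_lines)]
```

Splitting on record boundaries is the key constraint: equal-sized byte ranges would cut reads in half, whereas record-aligned chunks can each be handed to an independent worker (or EC2 instance).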
Parameter identification of civil engineering structures
NASA Technical Reports Server (NTRS)
Juang, J. N.; Sun, C. T.
1980-01-01
This paper concerns the development of an identification method for determining structural parameter variations in systems subjected to extended exposure to the environment. The concept of structural identifiability of a large-scale structural system in the absence of damping is presented. Three criteria are established indicating that a large number of system parameters (the coefficient parameters of the differential equations) can be identified by a few actuators and sensors. An eight-bay, fifteen-story frame structure is used as an example. A simple model is employed for analyzing the dynamic response of the frame structure.
Professionalism, bureaucracy and patriotism: the VA as a health care megasystem.
Rosenheck, R
The Veterans Administration supports the largest integrated psychiatry service in the country. As our oldest and largest "megasystem," this service offers a unique opportunity for examining distinctive features of such large health care delivery systems. Characteristic experiences of mental health professionals in this system are described, and the system is analyzed in terms of its organizational tasks, structure and cultures. In the future, psychiatry will be practiced in similarly large-scale organizations. Understanding the nature and workings of such organizations is likely to become essential to effective and satisfying professional work.
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.
Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong
2012-01-25
Multiplexing has become the major limitation of next-generation sequencing (NGS) in its application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were analyzed simultaneously through the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation across different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low per-sample sequencing throughput demands.
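The 4 × 8 pair-barcode scheme can be sketched as a short demultiplexing routine. The barcode sequences below are invented for illustration (the abstract does not list them), and no barcode error correction is attempted:

```python
# Hypothetical 4 forward and 8 reverse barcodes (sequences are made up).
FWD = ["ACGT", "TGCA", "GATC", "CTAG"]
REV = ["AAGG", "CCTT", "GGAA", "TTCC", "AGAG", "TCTC", "GAGA", "CTCT"]

# Every forward/reverse pair indexes one of the 4 x 8 = 32 libraries.
PAIR_TO_LIB = {(f, r): i for i, (f, r) in
               enumerate((f, r) for f in FWD for r in REV)}

def assign_library(read, fwd_len=4, rev_len=4):
    """Return the library index for a read carrying a forward barcode at
    its 5' end and a reverse barcode at its 3' end, or None if either
    barcode is unrecognized (no error correction in this sketch)."""
    pair = (read[:fwd_len], read[-rev_len:])
    return PAIR_TO_LIB.get(pair)
```

The point of the pairing is combinatorial economy: 4 + 8 = 12 distinct oligos address 32 libraries, instead of needing 32 distinct single barcodes.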
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A
In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near real time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease susceptible regions.
Application of regional climate models to the Indian winter monsoon over the western Himalayas.
Dimri, A P; Yasunari, T; Wiltshire, A; Kumar, P; Mathison, C; Ridley, J; Jacob, D
2013-12-01
The Himalayan region is characterized by pronounced topographic heterogeneity and land use variability from west to east, with a large variation in regional climate patterns. Over the western part of the region, almost one-third of the annual precipitation is received in winter during cyclonic storms embedded in westerlies, known locally as the western disturbance. In the present paper, the regional winter climate over the western Himalayas is analyzed from simulations produced by two regional climate models (RCMs) forced with large-scale fields from ERA-Interim. The analysis was conducted by the composition of contrasting (wet and dry) winter precipitation years. The findings showed that RCMs could simulate the regional climate of the western Himalayas and represent the atmospheric circulation during extreme precipitation years in accordance with observations. The results suggest the important role of topography in moisture fluxes, transport and vertical flows. Dynamical downscaling with RCMs represented regional climates at the mountain or even event scale. However, uncertainties of precipitation scale and liquid-solid precipitation ratios within RCMs are still large for the purposes of hydrological and glaciological studies. Copyright © 2013 Elsevier B.V. All rights reserved.
Density-dependent clustering: I. Pulling back the curtains on motions of the BAO peak
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.; Szapudi, István; McCullagh, Nuala; Szalay, Alexander S.; Falck, Bridget; Wang, Jie
2018-05-01
The most common statistic used to analyze large-scale structure surveys is the correlation function, or power spectrum. Here, we show how `slicing' the correlation function on local density brings sensitivity to interesting non-Gaussian features in the large-scale structure, such as the expansion or contraction of baryon acoustic oscillations (BAO) according to the local density. The sliced correlation function measures the large-scale flows that smear out the BAO, instead of just correcting them as reconstruction algorithms do. Thus, we expect the sliced correlation function to be useful in constraining the growth factor, and modified gravity theories that involve the local density. Out of the studied cases, we find that the run of the BAO peak location with density is best revealed when slicing on a ˜40 h-1 Mpc filtered density. But slicing on a ˜100 h-1 Mpc filtered density may be most useful in distinguishing between underdense and overdense regions, whose BAO peaks are separated by a substantial ˜5 h-1 Mpc at z = 0. We also introduce `curtain plots' showing how local densities drive particle motions toward or away from each other over the course of an N-body simulation.
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools for analyzing large-scale structure data, and many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
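The quoted computing budget is easy to verify: 3600 runs at 15 minutes each on 216 cores works out to just under the stated 200k core-hours.

```python
# Quick check of the quoted computing budget (all figures from the abstract).
runs = 3600              # number of COLA simulations
cores = 216              # computing cores per simulation
minutes_per_run = 15     # wall-clock time per simulation

core_hours = runs * cores * minutes_per_run / 60
# 3600 * 216 * 0.25 = 194,400 core-hours, consistent with the quoted ~200k.
```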
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) are investigated; the model serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and an 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters with parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimation difficult in practice. Copyright © 2017. Published by Elsevier Inc.
A large-scale clinical validation of an integrated monitoring system in the emergency department.
Clifton, David A; Wong, David; Clifton, Lei; Wilson, Sarah; Way, Rob; Pullinger, Richard; Tarassenko, Lionel
2013-07-01
We consider an integrated patient monitoring system, combining electronic patient records with high-rate acquisition of patient physiological data. There remain many challenges in increasing the robustness of "e-health" applications to a level at which they are clinically useful, particularly in the use of automated algorithms used to detect and cope with artifact in data contained within the electronic patient record, and in analyzing and communicating the resultant data for reporting to clinicians. There is a consequential "plague of pilots," in which engineering prototype systems do not enter into clinical use. This paper describes an approach in which, for the first time, the Emergency Department (ED) of a major research hospital has adopted such systems for use during a large clinical trial. We describe the disadvantages of existing evaluation metrics when applied to such large trials, and propose a solution suitable for large-scale validation. We demonstrate that machine learning technologies embedded within healthcare information systems can provide clinical benefit, with the potential to improve patient outcomes in the busy environment of a major ED and other high-dependence areas of patient care.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring changes in system status, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), a piece of system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing platforms. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, and optimizes the computing configuration based on the analyzed data.
1996-04-01
Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three…
Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model
ERIC Educational Resources Information Center
Von Davier, Matthias; Yamamoto, Kentaro
2004-01-01
The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
SPEECH TO FACULTY OF HARVARD-BOSTON SUMMER PROGRAM AT PREPLANNING MEETINGS.
ERIC Educational Resources Information Center
HAIZLIP, HAROLD
The area to which this group of teachers will be sent is characterized by its large influx of poor Negro families with poor cultural backgrounds. Some of the problems of this area will require a broad-scale, carefully analyzed, and planned attack within and by public schools. Every school has a hidden or subliminal curriculum which teaches, in…
H.E. Erickson; E.H. Helmer; T.J. Brandeis; A.E. Lugo
2014-01-01
Litter chemistry varies across landscapes according to factors rarely examined simultaneously. We analyzed 11 elements in forest floor (fallen) leaves and additional litter components from 143 forest inventory plots systematically located across Puerto Rico, a tropical island recovering from large-scale forest clearing. We assessed whether three existing, independently...
ERIC Educational Resources Information Center
Kim, Ah-Young
2015-01-01
Previous research in cognitive diagnostic assessment (CDA) of L2 reading ability has been frequently conducted using large-scale English proficiency exams (e.g., TOEFL, MELAB). Using CDA, it is possible to analyze individual learners' strengths and weaknesses in multiple attributes (i.e., knowledge, skill, strategy) measured at the item level.…
NASA Technical Reports Server (NTRS)
Salstein, D. A.; Rosen, R. D.
1982-01-01
A study using the analyses produced from the assimilation cycle of parallel model runs that both include and withhold satellite data was undertaken. The analysis of the state of the atmosphere is performed using data from a test period during the first Special Observing Period (SOP) of the Global Weather Experiment (FGGE).
ERIC Educational Resources Information Center
Schockaert, Frederik
2014-01-01
School districts at times need to implement structural and programmatic changes requiring students to attend a different school, which tends to elicit strong parental emotions. This qualitative study analyzes how parents in one suburban Rhode Island district responded to a large-scale redistricting at the elementary level in order to (a) attain a…
Paradoxes of Solidarity: Democracy and Colonial Legacies in Swedish Popular Education
ERIC Educational Resources Information Center
Dahlstedt, Magnus; Nordvall, Henrik
2011-01-01
Over the years, there have been several attempts to spread the "Swedish model" of popular education, that is, study circles and folk high schools, to countries in other parts of the world. In this article, the authors analyze the large-scale project of establishing folk development colleges in Tanzania in the 1970s and 1980s, by…
Signaling in large-scale neural networks.
Berg, Rune W; Hounsgaard, Jørn
2009-02-01
We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.
AEL Study of KERA Implementation in Four Rural Kentucky School Districts. 1993-94 Annual Report.
ERIC Educational Resources Information Center
Coe, Pamelia; And Others
A 5-year qualitative study of implementation of the Kentucky Education Reform Act (KERA) analyzes the effects on four rural school districts of large-scale changes in state policy. This annual report of the project focuses on five key KERA "strands." First, KERA mandates that grades K-3 be replaced with an ungraded primary program…
Harrington, Rebecca M.; Kwiatek, Grzegorz; Moran, Seth C.
2015-01-01
We analyze a group of 6073 low-frequency earthquakes recorded during a week-long temporary deployment of broadband seismometers at distances of less than 3 km from the crater at Mount St. Helens in September of 2006. We estimate the seismic moment (M0) and spectral corner frequency (f0) using a spectral ratio approach for events with a high signal-to-noise ratio (SNR) that have a cross-correlation coefficient of 0.8 or greater with at least five other events. A cluster analysis of cross-correlation values indicates that the group of 421 events meeting the SNR and cross-correlation criteria forms eight event families that exhibit largely self-similar scaling. We estimate the M0 and f0 values of the 421 events and calculate their static stress drop and scaled energy (ER/M0) values. The estimated values suggest self-similar scaling within families, as well as between five of eight families (i.e., approximately constant stress drop and constant scaled energy). We speculate that differences in scaled energy values for the two families with variable scaling may result from a lack of resolution in the velocity model. The observation of self-similar scaling is the first of its kind for such a large group of low-frequency volcanic events occurring during a single active dome extrusion eruption.
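The stress drop calculation described above can be sketched under the Brune (1970) source model, which relates corner frequency to source radius. This is a minimal illustration, not the paper's spectral-ratio method; the shear-wave speed `beta` and the constant `k` are assumed typical values.

```python
def brune_stress_drop(m0, f0, beta=3500.0, k=0.37):
    """Static stress drop (Pa) from seismic moment M0 (N*m) and corner
    frequency f0 (Hz) using the Brune source model: the source radius is
    r = k * beta / f0, and stress drop = (7/16) * M0 / r**3.
    beta (shear-wave speed, m/s) and k are assumed typical values."""
    r = k * beta / f0
    return 7.0 / 16.0 * m0 / r ** 3

def scaled_energy(er, m0):
    """Scaled energy ER/M0, which stays roughly constant under
    self-similar scaling."""
    return er / m0
```

Self-similar scaling within a family then corresponds to stress drop and ER/M0 remaining roughly constant as M0 varies.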
Increasing Scalability of Researcher Network Extraction from the Web
NASA Astrophysics Data System (ADS)
Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru
Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to get the co-occurrences of the names of two researchers and calculates the strength of the relation between them. Then we label the relation by analyzing the Web pages in which the two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, former approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network as compared to former methods.
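The relation-strength computation and the query-filtering idea can be sketched as follows. The abstract does not name the exact coefficient, so the overlap (matching) coefficient is assumed here; `worth_querying` and its threshold are hypothetical illustrations of filtering out superfluous co-occurrence queries.

```python
def relation_strength(hits_a, hits_b, hits_ab):
    """Estimate the strength of the relation between two researchers from
    search-engine hit counts: hits for each name alone and for both names
    together (co-occurrence). Uses the overlap (matching) coefficient,
    which normalizes by the rarer name so prolific authors do not
    dominate (assumed choice; the paper does not specify)."""
    if min(hits_a, hits_b) == 0:
        return 0.0
    return hits_ab / min(hits_a, hits_b)

def worth_querying(hits_a, hits_b, threshold=1000):
    """Filter sketch: skip the expensive co-occurrence query when either
    name alone is too rare to yield a meaningful relation (hypothetical
    threshold)."""
    return hits_a >= threshold and hits_b >= threshold
```

A crawler would call `worth_querying` first, issuing the co-occurrence query (and computing `relation_strength`) only for pairs that pass the filter.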
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce the needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Turbulent statistics in flow field due to interaction of two plane parallel jets
NASA Astrophysics Data System (ADS)
Bisoi, Mukul; Das, Manab Kumar; Roy, Subhransu; Patel, Devendra Kumar
2017-12-01
Turbulent characteristics of the flow field due to the interaction of two plane parallel jets separated by the jet-width distance are studied. Numerical simulation is carried out by large eddy simulation with a dynamic Smagorinsky model for the sub-grid scale stresses. The energy spectra are observed to follow the -5/3 power law in the inertial sub-range. A proper orthogonal decomposition study indicates that energy-carrying large coherent structures are present close to the nozzle exit. It is shown that these coherent structures interact with each other and finally disintegrate into smaller vortices further downstream. The turbulent fluctuations in the longitudinal and lateral directions are shown to follow a similarity profile, and the mean flow likewise maintains a close similarity. Prandtl's mixing length, the Taylor microscale, and the Kolmogorov length scale are shown along the lateral direction for different downstream locations. The autocorrelation in the longitudinal and transverse directions is also seen to follow a similarity profile. By plotting the probability density function, the skewness and the flatness (kurtosis) are analyzed. The Reynolds stress anisotropy tensor is calculated, and the anisotropy invariant map known as Lumley's triangle is presented and analyzed.
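The skewness and flatness analysis of the fluctuation PDF amounts to the third and fourth standardized moments; a minimal sketch (function name illustrative):

```python
import math

def skewness_flatness(samples):
    """Sample skewness and flatness (kurtosis) of velocity fluctuations.
    For a Gaussian PDF the reference values are skewness 0 and
    flatness 3; departures indicate intermittency or asymmetry."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n   # variance
    m3 = sum((x - mean) ** 3 for x in samples) / n   # third moment
    m4 = sum((x - mean) ** 4 for x in samples) / n   # fourth moment
    sigma = math.sqrt(m2)
    return m3 / sigma ** 3, m4 / m2 ** 2
```

Applied to velocity-fluctuation time series at each lateral station, these two numbers summarize the shape of the plotted probability density function.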
A study on efficient detection of network-based IP spoofing DDoS and malware-infected Systems.
Seo, Jung Woo; Lee, Sang Jin
2016-01-01
Large-scale network environments require effective detection and response methods against DDoS attacks. With the advancement of IT infrastructure such as servers and network equipment, DDoS attack traffic arising from even a few malware-infected systems has become capable of crippling an organization's internal network and is therefore a significant threat. This study calculates the frequency of network-based packet attributes and analyzes the anomalies of those attributes in order to detect IP-spoofed DDoS attacks. A method is also proposed for the effective detection of malware-infected systems triggering IP-spoofed DDoS attacks on an edge network. Detection accuracy and performance on real-time traffic collected from a core network are analyzed through the use of the proposed algorithm, and a prototype was developed to evaluate its performance. As a result, DDoS attacks on the internal network were detected in real time, and whether or not IP addresses were spoofed was confirmed. Detecting hosts infected by malware in real time allowed intrusion responses to be executed before stoppage of the internal network caused by large-scale attack traffic.
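Frequency analysis of a packet attribute can be sketched with an entropy-based anomaly measure. The abstract does not specify the statistic, so Shannon entropy over attribute values is assumed here, and the baseline/tolerance scheme is a hypothetical illustration.

```python
import math
from collections import Counter

def attribute_entropy(values):
    """Shannon entropy (bits) of one packet attribute, e.g. source IP.
    During an IP-spoofed flood, source-address entropy typically rises
    sharply while destination-address entropy collapses."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def is_anomalous(values, baseline, tolerance=1.0):
    """Flag a traffic window whose attribute entropy deviates from a
    learned baseline by more than `tolerance` bits (assumed scheme)."""
    return abs(attribute_entropy(values) - baseline) > tolerance
```

Each sliding window of captured packets would be summarized per attribute and compared against the baseline learned from normal traffic.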
The Cell Collective: Toward an open and collaborative approach to systems biology
2012-01-01
Background Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real-time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178
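Models of the kind described, built without mathematical equations, are typically logical (Boolean) networks; a minimal synchronous-update sketch follows. The three-node signal-transduction toy model is hypothetical, not taken from the Cell Collective.

```python
def step(state, rules):
    """Synchronously update a Boolean (logical) model: each node's next
    value is its logic rule applied to the current state."""
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical three-node toy model of signal transduction:
# Ligand -> Receptor -> Kinase, with the ligand held as a fixed input.
rules = {
    "Ligand":   lambda s: s["Ligand"],     # external input, held fixed
    "Receptor": lambda s: s["Ligand"],     # active when ligand present
    "Kinase":   lambda s: s["Receptor"],   # active when receptor active
}

state = {"Ligand": True, "Receptor": False, "Kinase": False}
state = step(state, rules)   # receptor activates
state = step(state, rules)   # kinase activates
```

A loss-of-function experiment is simulated by replacing a node's rule with `lambda s: False`; a gain of function with `lambda s: True`.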
The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data.
Thompson, Paul M; Stein, Jason L; Medland, Sarah E; Hibar, Derrek P; Vasquez, Alejandro Arias; Renteria, Miguel E; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J; Martin, Nicholas G; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C; Andreassen, Ole A; Apostolova, Liana G; Appel, Katja; Armstrong, Nicola J; Aribisala, Benjamin; Bastin, Mark E; Bauer, Michael; Bearden, Carrie E; Bergmann, Orjan; Binder, Elisabeth B; Blangero, John; Bockholt, Henry J; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I; Booth, Tom; Bowman, Ian J; Bralten, Janita; Brouwer, Rachel M; Brunner, Han G; Brohawn, David G; Buckner, Randy L; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R; Calhoun, Vince D; Cannon, Dara M; Cantor, Rita M; Carless, Melanie A; Caseras, Xavier; Cavalleri, Gianpiero L; Chakravarty, M Mallar; Chang, Kiki D; Ching, Christopher R K; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E; Czisch, Michael; Deary, Ian J; de Geus, Eco J C; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E; Foroud, Tatiana; Fox, Peter T; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C; Godlewska, Beata; Goldstein, Rita Z; Gollub, Randy L; Grabe, Hans J; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E; Gur, Ruben C; Göring, Harald H H; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B; Hall, Jeremy; Hardy, John; Hartman, Catharina A; Hass, Johanna; Hatton, Sean 
N; Haukvik, Unn K; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J; Hollinshead, Marisa; Holmes, Avram J; Homuth, Georg; Hoogman, Martine; Hong, L Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E; Hwang, Kristy S; Jack, Clifford R; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G; Kahn, René S; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B J; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A; Lauriello, John; Lawrie, Stephen M; Lee, Phil H; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D; Li, Chiang-Shan; Liberg, Benny; Liewald, David C; Liu, Xinmin; Lopez, Lorna M; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W J; Macqueen, Glenda M; Malt, Ulrik F; Mandl, René; Manoach, Dara S; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M; McMahon, Francis J; McMahon, Katie L; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W; Morris, Derek W; Moses, Eric K; Mueller, Bryon A; Muñoz Maniega, Susana; Mühleisen, Thomas W; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E; Nilsson, Lars-Göran; Nugent, Allison C; Nyberg, Lars; Olvera, Rene L; Oosterlaan, Jaap; Ophoff, Roel A; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D; Penninx, Brenda W; Peterson, Charles P; Pfennig, Andrea; Phillips, Mary; Pike, G Bruce; Poline, Jean-Baptiste; Potkin, Steven G; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L; Roffman, Joshua L; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J; Royle, Natalie A; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S; 
Salami, Alireza; Satterthwaite, Theodore D; Savitz, Jonathan; Saykin, Andrew J; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G; Schork, Andrew J; Schulz, S Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M; Simmons, Andrew; Sisodiya, Sanjay M; Smith, Colin; Smoller, Jordan W; Soares, Jair C; Sponheim, Scott R; Sprooten, Emma; Starr, John M; Steen, Vidar M; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G; Teumer, Alexander; Toga, Arthur W; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; Van den Heuvel, Martijn; van der Wee, Nic J; van Eijk, Kristel; van Erp, Theo G M; van Haren, Neeltje E M; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C; Veltman, Dick J; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M; Weale, Michael E; Weiner, Michael W; Wen, Wei; Westlye, Lars T; Whalley, Heather C; Whelan, Christopher D; White, Tonya; Winkler, Anderson M; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P; Thalamuthu, Anbupalam; Schofield, Peter R; Freimer, Nelson B; Lawrence, Natalia S; Drevets, Wayne
2014-06-01
The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
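The site-level meta-analysis can be sketched with standard fixed-effect (inverse-variance) pooling. This is a generic illustration of the technique, not ENIGMA's actual pipeline, which involves harmonization and quality control far beyond this.

```python
def fixed_effect_meta(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis: pool per-site
    effect estimates, weighting each by the inverse of its variance.
    Returns the pooled effect and its variance; pooling shrinks the
    variance, which is why multi-site consortia detect effects no
    single site could."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)
```

For example, two sites reporting the same effect with equal variance yield a pooled estimate with half that variance.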
[Construction and evaluation of an ecological network in Poyang Lake Eco-economic Zone, China].
Chen, Xiao Ping; Chen, Wen Bo
2016-05-01
Large-scale ecological patches play an important role in regional biodiversity conservation. However, with the rapid progress of China's urbanization, human disturbance of the environment is becoming stronger. Large-scale ecological patches degrade not only in quantity but also in quality, and the connections among them are threatened by isolation, seriously affecting biodiversity protection. Taking Poyang Lake Eco-economic Zone as a case, this paper established potential ecological corridors with a minimum-cost model and GIS techniques, taking the impacts of landscape type, slope and human disturbance into consideration. Then, based on a quantitative gravity model, we analyzed the intensity of ecological interactions between patches, and the potential ecological corridors were divided into two classes for the sake of prioritized protection. Finally, the important ecological nodes and breaking points were identified, and the structure of the potential ecological network was analyzed. The results showed that forest and cropland were the main landscape types composing the ecological corridors, that the interactions between ecological patches differed markedly, and that the resulting regional ecological network had a complex structure with high connectivity and closure. This work may provide a scientific basis for the protection of biodiversity and ecological network optimization in Poyang Lake Eco-economic Zone.
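The gravity-model step can be sketched as follows. The abstract does not give the exact formulation, so the common form (product of patch weights over squared least-cost distance) and the two-class threshold are assumptions for illustration.

```python
def gravity_interaction(weight_a, weight_b, cost):
    """Gravity-model interaction strength between two patches:
    proportional to the product of the patch weights (e.g. area or
    quality scores) and inversely proportional to the squared
    least-cost distance between them (assumed common form)."""
    return (weight_a * weight_b) / cost ** 2

def corridor_class(strength, threshold):
    """Divide corridors into two protection classes by interaction
    strength (hypothetical threshold)."""
    return "primary" if strength >= threshold else "secondary"
```

Corridors linking strongly interacting patches would then be assigned to the first (priority) protection class.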
NASA Astrophysics Data System (ADS)
Penna, James; Morgan, Kyle; Grubb, Isaac; Jarboe, Thomas
2017-10-01
The Helicity Injected Torus - Steady Inductive 3 (HIT-SI3) experiment forms and maintains spheromaks via Steady Inductive Helicity Injection (SIHI) using discrete injectors that inject magnetic helicity via a non-axisymmetric perturbation and drive toroidally symmetric current. Newer designs for larger SIHI-driven spheromaks incorporate a set of injectors connected to a single external manifold to allow more freedom for the toroidal structure of the applied perturbation. Simulations have been carried out using the NIMROD code to assess the effectiveness of various imposed mode structures and injector schemes in driving current via Imposed Dynamo Current Drive (IDCD). The results are presented here for varying flux conserver shapes on a device approximately 1.5 times larger than the current HIT-SI3 experiment. The imposed mode structures and spectra of simulated spheromaks are analyzed in order to examine magnetic structure and stability and determine an optimal regime for IDCD sustainment in a large device. The development of scaling laws for manifold operation is also presented, and simulation results are analyzed and assessed as part of the development path for the large scale device.
Large-Scale Phase Synchrony Reflects Clinical Status After Stroke: An EEG Study.
Kawano, Teiji; Hattori, Noriaki; Uno, Yutaka; Kitajo, Keiichi; Hatakenaka, Megumi; Yagura, Hajime; Fujimoto, Hiroaki; Yoshioka, Tomomi; Nagasako, Michiko; Otomune, Hironori; Miyai, Ichiro
2017-06-01
Stroke-induced focal brain lesions often exert remote effects via residual neural network activity. Electroencephalographic (EEG) techniques can assess neural network modifications after brain damage. Recently, EEG phase synchrony analyses have shown associations between the level of large-scale phase synchrony of brain activity and clinical symptoms; however, few reports have assessed such associations in stroke patients. The aim of this study was to investigate the clinical relevance of hemispheric phase synchrony in stroke patients by calculating its correlation with clinical status. This cross-sectional study included 19 patients with post-acute ischemic stroke admitted for inpatient rehabilitation. Interhemispheric phase synchrony indices (IH-PSIs) were computed in two frequency bands (alpha [α] and beta [β]), and associations between the indices and scores on the Functional Independence Measure (FIM), the National Institutes of Health Stroke Scale (NIHSS), and the Fugl-Meyer Motor Assessment (FMA) were analyzed. For further assessment of the IH-PSIs, ipsilesional intrahemispheric PSIs (IntraH-PSIs) as well as IH- and IntraH-phase lag indices (PLIs) were also evaluated. IH-PSIs correlated significantly with FIM scores and NIHSS scores. In contrast, IH-PSIs did not correlate with FMA scores. IntraH-PSIs correlated with FIM scores after removal of an outlier. The results of the PLI analysis were consistent with those for the IH-PSIs. The PSIs correlated with performance on the activities-of-daily-living scale but not with scores on a pure motor impairment scale. These results suggest that large-scale phase synchrony, represented by IH-PSIs, provides a novel surrogate marker for clinical status after stroke.
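A phase synchrony index of the kind used here is commonly defined as the modulus of the mean phase-difference vector; a minimal sketch (band-pass filtering and instantaneous-phase extraction, e.g. via the Hilbert transform, are omitted):

```python
import cmath

def phase_synchrony_index(phases_a, phases_b):
    """Phase synchronization index between two instantaneous-phase
    series (radians): the modulus of the mean unit phase-difference
    vector. 1 = perfect phase locking, 0 = no consistent relation."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (a - b))
                   for a, b in zip(phases_a, phases_b)) / n)
```

Two channels locked at any constant phase offset yield an index of 1, while uniformly scattered phase differences drive the index toward 0.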
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
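The LRE principle can be sketched as follows: within the central region of an amplification profile, the per-cycle efficiency declines approximately linearly with fluorescence, so a straight-line fit characterizes the amplification. This is a minimal sketch of the principle only, not the LRE Analyzer implementation.

```python
def cycle_efficiencies(readings):
    """Per-cycle amplification efficiency E_C = F_C / F_(C-1) - 1
    from consecutive raw fluorescence readings."""
    return [f1 / f0 - 1.0 for f0, f1 in zip(readings, readings[1:])]

def linear_fit(xs, ys):
    """Least-squares line y = a + b*x. Regressing E_C against F_C
    ('linear regression of efficiency') gives an intercept near the
    maximal efficiency and a negative slope from which target quantity
    can be derived without a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

In the early exponential region the efficiencies are nearly constant; the linear decline appears as fluorescence approaches plateau.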
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals, not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
Resources for Functional Genomics Studies in Drosophila melanogaster
Mohr, Stephanie E.; Hu, Yanhui; Kim, Kevin; Housden, Benjamin E.; Perrimon, Norbert
2014-01-01
Drosophila melanogaster has become a system of choice for functional genomic studies. Many resources, including online databases and software tools, are now available to support the design or identification of relevant fly stocks and reagents, as well as the analysis and mining of existing functional genomic, transcriptomic, proteomic, and other datasets. These include large community collections of fly stocks and plasmid clones, “meta” information sites like FlyBase and FlyMine, and an increasing number of more specialized reagents, databases, and online tools. Here, we introduce key resources useful for planning large-scale functional genomics studies in Drosophila and for analyzing, integrating, and mining the results of those studies in ways that facilitate identification of the highest-confidence results and generation of new hypotheses. We also discuss ways in which existing resources can be used and might be improved, and suggest a few areas of future development that would further support large- and small-scale studies in Drosophila and facilitate use of Drosophila information by the research community more generally. PMID:24653003
NASA Astrophysics Data System (ADS)
Zhong, Hua; Zhang, Song; Hu, Jian; Sun, Minhong
2017-12-01
This paper deals with the imaging problem for one-stationary bistatic synthetic aperture radar (BiSAR) with a high-squint, large-baseline configuration. In this bistatic configuration, accurate focusing of BiSAR data is difficult due to the relatively large range cell migration (RCM), severe range-azimuth coupling, and inherent azimuth-geometric variance. To circumvent these issues, an enhanced azimuth nonlinear chirp scaling (NLCS) algorithm based on an ellipse model is investigated in this paper. In the range processing, a method combining a deramp operation and the keystone transform (KT) is adopted to remove linear RCM completely and mitigate range-azimuth cross-coupling. In the azimuth focusing, an ellipse model is established to analyze and describe the characteristics of the azimuth-variant Doppler phase. Based on the new model, an enhanced azimuth NLCS algorithm is derived to focus one-stationary BiSAR data. Simulation results presented at the end of this paper validate the effectiveness of the proposed algorithm.
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
NASA Astrophysics Data System (ADS)
Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason
2015-03-01
The Open Microscopy Environment (OME) has built and released, under open source licenses, Bio-Formats, a Java-based proprietary file format conversion tool, and OMERO, an enterprise data management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support the large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprising many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications like digital pathology, high content screening and light sheet microscopy that routinely create large datasets that must be managed and analyzed.
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructures) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength that was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams.
Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
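A model law of the kind derived by dimensional analysis can be sketched with the classical replica-scaling predictions for a static test: with all lengths scaled by a factor, loads scale with the square of that factor, deflections scale linearly, and strains are invariant. These exponents are the textbook assumption for geometrically scaled elastic structures, not the paper's derived law.

```python
def scale_static_quantities(scale, prototype_load,
                            prototype_deflection, prototype_strain):
    """Classical replica-scaling predictions for a static beam test on a
    model whose lengths are prototype lengths divided by `scale` in
    reverse, i.e. model quantities are predicted from prototype ones:
    loads scale by scale**2, deflections by scale, strains unchanged
    (assumed textbook model law)."""
    return {
        "load": prototype_load * scale ** 2,
        "deflection": prototype_deflection * scale,
        "strain": prototype_strain,   # strain is dimensionless
    }
```

The strength scale effect reported above is precisely a violation of the last line: measured failure strains of the models did not match the prototypes even though the model law predicts they should.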
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing such data. These challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools for other domains that involve spatial properties.
We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run time performance in spatiotemporal big data applications.
CaSO4 Scale Inhibition by a Trace Amount of Zinc Ion in Piping System
NASA Astrophysics Data System (ADS)
Mangestiyono, W.; Sutrisno
2017-05-01
Usually, a small steam generator is not complemented by equipment such as demineralization and chlorination apparatus, since economic considerations take precedence. This situation was observed in a case study of a green tea industrial process in which the boiler capacity was no more than 1 ton/hour. Operation of the small boiler led to scale formation in its piping system. Within a year of operation, a large calcium scale had already attached to the inner surface of the pipe. This scale formed a layer that decreased the overall heat transfer coefficient, prolonged the process time, and decreased production. The aim of the current research was to address this problem through a laboratory study of inhibiting CaSO4 scale formation by the addition of trace amounts of zinc ion. The research was conducted on an experimental rig built in-house, which consisted of a dosing pump for controlling the flow rate and a thermocouple for controlling the temperature. A synthetic solution was prepared with 3,500 ppm concentrations of CaCl2 and Na2SO4. The zinc concentration was set at 0.00, 5.00, and 10.00 ppm. The deposits were characterized by scanning electron microscopy (SEM) to analyze the influence of zinc ion addition on the crystal polymorph. The induction time was also investigated to analyze nucleation, and it was found at the 9th, 13th, and 19th minute for zinc ion additions of 0.00, 5.00, and 10.00 ppm, respectively. After running for four hours, the scale growth rate was found to be 5.799, 5.501, and 4.950 × 10^-3 g/min for 0.00, 5.00, and 10.00 ppm of zinc addition at 50 °C.
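As a back-of-envelope consistency check on the reported growth rates (computed here for illustration, not taken from the paper):

```python
# Total scale mass deposited over the 4-hour runs, from the reported
# growth rates at 50 degrees C (zinc dose in ppm -> rate in g/min).
rates = {0.00: 5.799e-3, 5.00: 5.501e-3, 10.00: 4.950e-3}
minutes = 4 * 60
totals = {dose: r * minutes for dose, r in rates.items()}  # grams per run
reduction = 1 - rates[10.00] / rates[0.00]                 # slowdown at 10 ppm
```

So the untreated run deposits roughly 1.39 g of scale in four hours, and 10 ppm of zinc slows deposition by about 15%.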
NASA Astrophysics Data System (ADS)
Millstein, D.; Brown, N. J.; Zhai, P.; Menon, S.
2012-12-01
We use the WRF/Chem model (Weather Research and Forecasting model with chemistry) and pollutant emissions based on the EPA National Emission Inventories from 2005 and 2008 to model regional climate and air quality over the continental United States. Additionally, 2030 emission scenarios are developed to investigate the effects of future enhancements to solar power generation. Modeling covered 6 summer and 6 winter weeks each year. We model feedback between aerosols and meteorology and thus capture direct and indirect aerosol effects. The grid resolution is 25 km, with no nesting. Between 2005 and 2008, significant emission reductions were reported in the National Emission Inventory. The 2008 weekday emissions over the continental U.S. of SO2 and NO were reduced from 2005 values by 28% and 16%, respectively. Emission reductions of this magnitude are similar in scale to the potential emission reductions from various energy policy initiatives. By evaluating modeled and observed air quality changes from 2005 to 2008, we analyze how well the model represents the effects of historical emission changes. We also gain insight into how well the model might predict the effects of future emission changes. In addition to direct comparisons of model outputs to ground and satellite observations, we compare observed differences between 2005 and 2008 to corresponding modeled differences. Modeling was extended to future scenarios (2030) to simulate the air quality and regional climate effects of large-scale adoption of solar power. The year 2030 was selected to allow time for development of solar generation infrastructure. The 2030 emission scenario was scaled, with separate factors for different economic sectors, from the 2008 National Emissions Inventory.
The changes to emissions caused by the introduction of large-scale solar power (here assumed to be 10% of total energy generation) are based on results from a parallel project that used an electricity grid model applied over multiple regions across the country. The regional climate and air quality effects of future large-scale solar power adoption are analyzed in the context of uncertainty quantified by the dynamic evaluation of the historical (2005 and 2008) WRF/Chem simulations.
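The sector-wise scaling of a base-year inventory described above can be sketched as follows; the sector names and factors here are hypothetical placeholders, not values from the study:

```python
# Scale a base-year emissions inventory by per-sector factors to build a
# future scenario. Sectors and numbers are illustrative only.
def scale_inventory(base, factors):
    """Apply per-sector scale factors; sectors without a factor keep 1.0."""
    return {sector: amount * factors.get(sector, 1.0)
            for sector, amount in base.items()}

base_2008 = {"power": 100.0, "transport": 80.0, "industry": 60.0}  # tons/day
factors_2030 = {"power": 0.6, "transport": 0.9}                    # assumed
scenario_2030 = scale_inventory(base_2008, factors_2030)
```

Sectors most affected by solar adoption (here, power generation) get the smallest factors, while untouched sectors carry over unchanged.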
Colak, Recep; Moser, Flavia; Chu, Jeffrey Shih-Chieh; Schönhuth, Alexander; Chen, Nansheng; Ester, Martin
2010-10-25
Computational prediction of functionally related groups of genes (functional modules) from large-scale data is an important issue in computational biology. Gene expression experiments and interaction networks are well-studied large-scale data sources, available for many not yet exhaustively annotated organisms. It has been well established that, when these two data sources are analyzed jointly, modules are often reflected by highly interconnected (dense) regions in the interaction networks whose participating genes are co-expressed. However, the tractability of the problem had remained unclear, and methods by which to exhaustively search for such constellations had not been presented. We provide an algorithmic framework, referred to as Densely Connected Biclustering (DECOB), by which the aforementioned search problem becomes tractable. To benchmark the predictive power inherent in the approach, we computed all co-expressed, dense regions in physical protein and genetic interaction networks from human and yeast. An automated filtering procedure reduces our output to smaller collections of modules, comparable to state-of-the-art approaches. Our results performed favorably in a fair benchmarking competition which adheres to standard criteria. We demonstrate the usefulness of an exhaustive module search by using the unreduced output to more quickly perform GO term related function prediction tasks, and we point out the advantages of the exhaustive output by predicting functional relationships in two examples. We demonstrate that the computation of all densely connected and co-expressed regions in interaction networks is an approach to module discovery of considerable value.
Beyond confirming the well-established hypothesis that such co-expressed, densely connected interaction network regions reflect functional modules, we open up novel computational ways to comprehensively analyze the modular organization of an organism based on prevalent and widely available large-scale datasets. Software and data sets are available at http://www.sfu.ca/~ester/software/DECOB.zip.
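The two quantities that define such modules, interaction density and co-expression, can be sketched minimally. This is illustrative scoring only, not the exhaustive DECOB search; the toy graph and correlation values are hypothetical:

```python
import itertools

def density(edges, nodes):
    """Fraction of possible node pairs in `nodes` that are connected."""
    nodes = set(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    present = sum(1 for u, v in edges if u in nodes and v in nodes)
    return present / (n * (n - 1) / 2)

def mean_coexpression(corr, nodes):
    """Average pairwise expression correlation over a candidate module."""
    pairs = list(itertools.combinations(sorted(nodes), 2))
    return sum(corr[p] for p in pairs) / len(pairs)

# Toy interaction network and co-expression correlations:
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
corr = {("a", "b"): 0.9, ("a", "c"): 0.8, ("b", "c"): 0.85}
d = density(edges, {"a", "b", "c"})            # all 3 possible edges present
m = mean_coexpression(corr, {"a", "b", "c"})   # average of the 3 correlations
```

A module search then amounts to enumerating node sets for which both scores exceed chosen thresholds; the paper's contribution is making that enumeration exhaustive yet tractable.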
Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.
2008-07-30
As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines, including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe the Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user’s familiar environment without preventing researchers from using traditional tools and methods. We share these experiences to serve as an example of effectively analyzing data-intensive, large-scale simulation data.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems, and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Gyrodampers for large space structures
NASA Technical Reports Server (NTRS)
Aubrun, J. N.; Margulies, G.
1979-01-01
The problem of controlling the vibrations of a large space structure by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single-gimbal control moment gyros that are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or of extrapolating results from one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.
The structure and evolution of coronal holes
NASA Technical Reports Server (NTRS)
Timothy, A. F.; Krieger, A. S.; Vaiana, G. S.
1975-01-01
Soft X-ray observations of coronal holes are analyzed to determine the structure, temporal evolution, and rotational properties of these features, as well as possible mechanisms which may account for their almost rigid rotational characteristics. It is shown that coronal holes are open features with a divergent magnetic-field configuration resulting from a particular large-scale magnetic-field topology. They are apparently formed when the successive emergence and dispersion of active-region fields produce a swath of unipolar field bounded by fields of opposite polarity, and they die when large-scale field patterns emerge which significantly distort the original field configuration. Two types of holes are described (compact and elongated), and three possible rotation mechanisms are considered: a rigidly rotating subphotospheric phenomenon, a linking of high and low latitudes by closed field lines, and an interaction between moving coronal material and open field lines.
[Methods of high-throughput plant phenotyping for large-scale breeding and genetic experiments].
Afonnikov, D A; Genaev, M A; Doroshkov, A V; Komyshev, E G; Pshenichnikova, T A
2016-07-01
Phenomics is a field of science at the junction of biology and informatics that addresses the rapid, accurate estimation of plant phenotypes; it developed rapidly because of the need to analyze phenotypic characteristics in large-scale genetic and breeding experiments in plants. It is based on methods of computer image analysis and the integration of biological data. Owing to automation, new approaches make it possible to considerably accelerate the process of estimating the characteristics of a phenotype, to increase its accuracy, and to remove the subjectivity inherent to human observers. The main technologies of high-throughput plant phenotyping in both controlled and field conditions, their advantages and disadvantages, and the prospects of their use for the efficient solution of problems of plant genetics and breeding are presented in this review.
Lightweight computational steering of very large scale molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beazley, D.M.; Lomdahl, P.S.
1996-09-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy-to-use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
To meet the demands of operational monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through an analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved using an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631
NASA Astrophysics Data System (ADS)
Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Boudin, Frédérick; Durand, Stéphane; Lavenant, Nicolas
2015-12-01
Fractured aquifers which bear valuable water resources are often difficult to characterize with classical hydrogeological tools due to their intrinsic heterogeneities. Here we implement ground surface deformation tools (tiltmetry and optical leveling) to monitor groundwater pressure changes induced by a classical hydraulic test at the Ploemeur observatory. By jointly analyzing complementary time constraining data (tilt) and spatially constraining data (vertical displacement), our results strongly suggest that the use of these surface deformation observations allows for estimating storativity and structural properties (dip, root depth, and lateral extension) of a large hydraulically active fracture, in good agreement with previous studies. Hence, we demonstrate that ground surface deformation is a useful addition to traditional hydrogeological techniques and opens possibilities for characterizing important large-scale properties of fractured aquifers with short-term well tests as a controlled forcing.
Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L.; Costanzo, Michael; Andrews, Brenda; Boone, Charles
2017-01-01
Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. PMID:28325812
From lab to full-scale ultrafiltration in microalgae harvesting
NASA Astrophysics Data System (ADS)
Wenten, I. G.; Steven, S.; Dwiputra, A.; Khoiruddin; Hakim, A. N.
2017-07-01
Ponding systems are generally used for microalgae cultivation. However, selecting an appropriate technology for the harvesting process is challenging due to the low cell density of microalgae cultivated in ponding systems and the large volume of water to be handled. One of the promising technologies for microalgae harvesting is ultrafiltration (UF). In this study, the performance of UF during harvesting of microalgae is investigated in lab- and full-scale tests. The performances at both scales are compared and analyzed to provide an understanding of the aspects that affect the yield under laboratory and actual conditions. Furthermore, a unique self-standing non-modular UF is introduced in the full-scale test. The non-modular UF exhibits several advantages, such as simple piping and connections, a single pump for filtration and backwashing, and a smaller footprint. With those advantages, non-modular UF could be a promising technology for microalgae harvesting at industrial scale.
Wolfe, J H; Mihalov, J D; Collard, H R; McKibbin, D D; Frank, L A; Intriligator, D S
1980-01-25
The Ames Research Center Pioneer 11 plasma analyzer experiment provided measurements of the solar wind interaction with Saturn and the character of the plasma environment within Saturn's magnetosphere. It is shown that Saturn has a detached bow shock wave and magnetopause quite similar to those at Earth and Jupiter. The scale size of the interaction region for Saturn is roughly one-third that at Jupiter, but Saturn's magnetosphere is equally responsive to changes in the solar wind dynamic pressure. Saturn's outer magnetosphere is inflated, as evidenced by the observation of large fluxes of corotating plasma. It is postulated that Saturn's magnetosphere may undergo a large expansion when the solar wind pressure is greatly diminished by the presence of Jupiter's extended magnetospheric tail when the two planets are approximately aligned along the same solar radial vector.
Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.
Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A
2017-03-01
Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
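A zero-inflated Poisson, the simplest of the zero-accommodating models compared above, mixes a structural-zero process with a count process. A minimal sketch of its probability mass function follows; the parameter values are illustrative, not the fitted estimates from the study:

```python
from math import exp, factorial

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson: structural zero with probability pi,
    otherwise a Poisson(lam) count. Zeros arise from both components."""
    pois = exp(-lam) * lam**y / factorial(y)
    if y == 0:
        return pi + (1 - pi) * pois   # structural zeros plus Poisson zeros
    return (1 - pi) * pois

# Probability of reporting NRS = 0 under hypothetical parameters:
p0 = zip_pmf(0, pi=0.3, lam=2.0)
```

Fitting pi and lam separately is what lets the model reproduce both the 34% zeros and the right-skewed positive scores, which a plain Poisson or linear model cannot do simultaneously.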
Global Distribution of Density Irregularities in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Kil, Hyosub; Heelis, R. A.
1998-01-01
We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions with bubble-like structures which are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation delta N/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and is always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depletions with bubble structures which are superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.
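The rms deviation delta N/N used above as an irregularity index can be computed as a simple windowed statistic. A sketch on synthetic data follows; the window length in samples is hypothetical, standing in for the paper's 18-km path length:

```python
import math

def rms_relative_deviation(density, window):
    """RMS of (N - mean)/mean over consecutive windows of samples,
    a simple proxy for the delta N/N irregularity index."""
    out = []
    for i in range(0, len(density) - window + 1, window):
        seg = density[i:i + window]
        mean = sum(seg) / window
        var = sum((x - mean) ** 2 for x in seg) / window
        out.append(math.sqrt(var) / mean)
    return out

# Synthetic densities: a perturbed window followed by a quiet one.
vals = rms_relative_deviation([1.0, 1.2, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0],
                              window=4)
```

The quiet window returns zero, while the perturbed one returns a nonzero index; thresholding such values is how an overall occurrence probability map can be built.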
Wang, Jack T H; Daly, Joshua N; Willner, Dana L; Patil, Jayee; Hall, Roy A; Schembri, Mark A; Tyson, Gene W; Hugenholtz, Philip
2015-05-01
Clinical microbiology testing is crucial for the diagnosis and treatment of community- and hospital-acquired infections. Laboratory scientists need to utilize technical and problem-solving skills to select from a wide array of microbial identification techniques. The inquiry-driven laboratory training required to prepare microbiology graduates for this professional environment can be difficult to replicate within undergraduate curricula, especially in courses that accommodate large student cohorts. We aimed to improve undergraduate scientific training by engaging hundreds of introductory microbiology students in an Authentic Large-Scale Undergraduate Research Experience (ALURE). The ALURE aimed to characterize the microorganisms that reside in the healthy human oral cavity (the oral microbiome) by analyzing hundreds of samples obtained from student volunteers within the course. Students were able to choose from selective and differential culture media, Gram staining, microscopy, as well as polymerase chain reaction (PCR) and 16S rRNA gene sequencing techniques, in order to collect, analyze, and interpret novel data to determine the collective oral microbiome of the student cohort. Pre- and postsurvey analysis of student learning gains across two iterations of the course (2012-2013) revealed significantly higher student confidence in laboratory skills following the completion of the ALURE (p < 0.05 using the Mann-Whitney U-test). Learning objectives on effective scientific communication were also met through effective student performance in laboratory reports describing the research outcomes of the project. The integration of undergraduate research in clinical microbiology has the capacity to deliver authentic research experiences and improve scientific training for large cohorts of undergraduate students.
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
Multi-scale structures of turbulent magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, T. K. M., E-mail: takuma.nakamura@oeaw.ac.at; Nakamura, R.; Narita, Y.
2016-05-15
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify the spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles, mainly along the edges of the primary ion scale flux ropes and the reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and the oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to data from multipoint, high-resolution spacecraft observations such as the NASA Magnetospheric Multiscale (MMS) mission.
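The k-BPF idea, reconstructing only the Fourier modes inside a chosen wavenumber band back to the spatial domain, can be sketched with a 2D FFT. This is a generic band-pass sketch on a synthetic field, not the authors' implementation:

```python
import numpy as np

def k_bandpass(field, k_lo, k_hi, dx=1.0):
    """Zero all Fourier modes with |k| outside [k_lo, k_hi), then inverse
    transform: the spatial-domain field carried by that wavenumber band."""
    fk = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0], d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(field.shape[1], d=dx) * 2 * np.pi
    kmag = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))
    mask = (kmag >= k_lo) & (kmag < k_hi)
    return np.real(np.fft.ifft2(fk * mask))

# Synthetic field: a large-scale k=2 mode plus a small-scale k=20 ripple.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
field = np.sin(2 * x)[:, None] + 0.1 * np.sin(20 * x)[:, None] * np.ones(64)
large = k_bandpass(field, 0.0, 5.0, dx=x[1] - x[0])   # keeps only k=2
```

Applying the filter twice, once below and once above the ion-scale wavenumber, separates the background variations from the small-scale fluctuations exactly as described above.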
Multi-scale structures of turbulent magnetic reconnection
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.
2016-05-01
We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA Magnetospheric Multiscale (MMS) mission.
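The k-BPF technique described above amounts to masking Fourier modes above or below an ion-scale cutoff before transforming back to the spatial domain. Below is a minimal 2D sketch assuming a single isotropic cutoff wavenumber `k_ion`; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def k_bandpass_split(field, dx, k_ion):
    """Split a 2D field into large-scale (k < k_ion) and small-scale
    (k >= k_ion) parts by masking Fourier modes and inverse-transforming."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    fhat = np.fft.fft2(field)
    large = np.fft.ifft2(np.where(kmag < k_ion, fhat, 0)).real
    small = np.fft.ifft2(np.where(kmag >= k_ion, fhat, 0)).real
    return large, small
```

Because the two masks partition wavenumber space, `large + small` reconstructs the original field exactly, which is what lets the fluctuations and the background be studied separately without losing information.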
Can standard cosmological models explain the observed Abell cluster bulk flow?
NASA Technical Reports Server (NTRS)
Strauss, Michael A.; Cen, Renyue; Ostriker, Jeremiah P.; Lauer, Tod R.; Postman, Marc
1995-01-01
Lauer and Postman (LP) observed that all Abell clusters with redshifts less than 15,000 km/s appear to be participating in a bulk flow of 689 km/s with respect to the cosmic microwave background. We find this result difficult to reconcile with all popular models for large-scale structure formation that assume Gaussian initial conditions. This conclusion is based on Monte Carlo realizations of the LP data, drawn from large particle-mesh N-body simulations for six different models of the initial power spectrum (standard, tilted, and Omega(sub 0) = 0.3 cold dark matter, and two variants of the primordial baryon isocurvature model). We have taken special care to treat properly the longest-wavelength components of the power spectra. The simulations are sampled, 'observed,' and analyzed as identically as possible to the LP cluster sample. Large-scale bulk flows as measured from clusters in the simulations are in excellent agreement with those measured from the grid: the clusters do not exhibit any strong velocity bias on large scales. Bulk flows with amplitude as large as that reported by LP are not uncommon in the Monte Carlo data sets; the distribution of measured bulk flows before error bias subtraction is roughly Maxwellian, with a peak around 400 km/s. However, the chi-squared of the observed bulk flow, taking into account the anisotropy of the error ellipsoid, is much more difficult to match in the simulations. The models examined are ruled out at confidence levels between 94% and 98%.
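The "roughly Maxwellian" shape of the Monte Carlo bulk-flow distribution follows from the measured bulk-flow vector having three roughly Gaussian components: the speed |v| of such a vector follows a Maxwell distribution. A sketch, with the per-component dispersion chosen (illustratively, not a fit from the paper) so the mode of |v| sits near the ~400 km/s peak quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-component dispersion chosen so the Maxwellian mode sqrt(2)*sigma = 400 km/s;
# this value is illustrative, not the paper's measured dispersion.
sigma = 400.0 / np.sqrt(2.0)
v = rng.normal(0.0, sigma, size=(100_000, 3))  # mock bulk-flow vectors (km/s)
speed = np.linalg.norm(v, axis=1)
# For a Maxwellian, the mean speed is sqrt(8/pi)*sigma (~451 km/s here)
print(speed.mean())
```

The point of the comparison in the paper is not the speed distribution itself but the chi-squared of the observed vector against the full anisotropic error ellipsoid, which the Gaussian models fail to match.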
On optical imaging through aircraft turbulent boundary layers
NASA Technical Reports Server (NTRS)
Sutton, G. W.
1980-01-01
Optical resolution quality as affected by aircraft turbulent boundary layers is analyzed. Wind-tunnel data were analyzed to obtain the variation of boundary layer turbulence scale length and mass density rms fluctuations with Mach number. The data gave good agreement with a mass density fluctuation turbulence spectrum that is either isotropic or orthogonally anisotropic. The data did not match an isotropic turbulence velocity spectrum, which causes an anisotropic non-orthogonal mass density fluctuation spectrum. The results indicate that the average mass density rms fluctuation is about 10% of the maximum mass density across the boundary layer and that the transverse turbulence scale size is about 10% of the boundary layer thickness. The results indicate that the effect of the turbulent boundary layer is large-angle scattering, which decreases contrast but not resolution. Using extinction as a criterion, the range of acceptable aircraft operating conditions is given.
NASA Astrophysics Data System (ADS)
González López, J.; Jansen, K.; Renner, D. B.; Shindler, A.
2013-02-01
In a previous paper (González López et al., 2013) [1], we have discussed the non-perturbative tuning of the chirally rotated Schrödinger functional (χSF). This tuning is required to eliminate bulk O(a) cutoff effects in physical correlation functions. Using our tuning results obtained in González López et al. (2013) [1], we perform scaling and universality tests analyzing the residual O(a) cutoff effects of several step-scaling functions and we compute renormalization factors at the matching scale. As an example of a possible application of the χSF, we compute the renormalized strange quark mass using large volume data obtained from Wilson twisted mass fermions at maximal twist.
Lunar terrain mapping and relative-roughness analysis
NASA Technical Reports Server (NTRS)
Rowan, L. C.; Mccauley, J. F.; Holm, E. A.
1971-01-01
Terrain maps of the equatorial zone were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings, as well as for Ranger and Lunar Orbiter photographs. Lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative roughness characteristics. For some morphologically homogeneous mare areas, relative roughness can be extrapolated to the large scales from measurements at small scales.
Scaling laws in the dynamics of crime growth rate
NASA Astrophysics Data System (ADS)
Alves, Luiz G. A.; Ribeiro, Haroldo V.; Mendes, Renio S.
2013-06-01
The increasing number of crimes in areas with large concentrations of people has made cities one of the main sources of violence. Understanding how crime rates grow and how they relate to city size goes beyond an academic question, being a central issue for contemporary society. Here, we characterize and analyze quantitative aspects of murders in the period from 1980 to 2009 in Brazilian cities. We find that the distributions of the annual, biannual and triannual logarithmic homicide growth rates exhibit the same functional form for distinct scales, that is, a scale-invariant behavior. We also identify asymptotic power-law decay relations between the standard deviations of these three growth rates and the initial size. Further, we discuss similarities with complex organizations.
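The power-law decay between the growth-rate standard deviation and initial size, sigma(N0) ~ N0^(-beta), can be estimated by a straight-line fit in log-log space. A sketch on synthetic data; the exponent 0.2 below is illustrative, not the paper's measured value:

```python
import numpy as np

def powerlaw_exponent(sizes, sigmas):
    """Fit sigma ~ size^(-beta) by linear regression in log-log space."""
    slope, _ = np.polyfit(np.log(sizes), np.log(sigmas), 1)
    return -slope

sizes = np.array([1e3, 1e4, 1e5, 1e6])   # initial sizes (synthetic)
sigmas = 2.0 * sizes ** -0.2             # growth-rate std with beta = 0.2
print(powerlaw_exponent(sizes, sigmas))  # recovers ~0.2
```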
Analysis on the restriction factors of the green building scale promotion based on DEMATEL
NASA Astrophysics Data System (ADS)
Wenxia, Hong; Zhenyao, Jiang; Zhao, Yang
2017-03-01
In order to promote the large-scale development of green building in China, the DEMATEL method was used to classify the factors influencing green building development into three parts: the green building market, green technology, and the macro economy. Through the DEMATEL model, the interaction mechanism of each part was analyzed. The mutual influence of each barrier factor affecting green building promotion was quantitatively analyzed, and the key factors for the development of green building in China were determined. In addition, implementation strategies for promoting large-scale green building development were put forward. This research provides reference and practical value for making policies to promote green building.
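The core DEMATEL computation is standard: normalize the direct-influence matrix A and compute the total-relation matrix T = D(I - D)^(-1), from which each factor's prominence (r + c) and net cause/effect role (r - c) follow. A sketch with an illustrative 3x3 matrix; the influence scores below are made up for demonstration, not the paper's survey data:

```python
import numpy as np

def dematel(A):
    """DEMATEL: normalize direct-influence matrix A, compute total relation
    T = D (I - D)^-1; return T, prominence (r + c), and net relation (r - c)."""
    A = np.asarray(A, dtype=float)
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalization factor
    D = A / s
    T = D @ np.linalg.inv(np.eye(len(A)) - D)
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c

# Illustrative 0-4 influence scores among market, technology, macro economy
A = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
T, prominence, net = dematel(A)
```

Factors with high prominence matter most overall; a positive net relation marks a cause factor, a negative one an effect factor.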
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, A.; Han, T. Y.
Cuprous oxide is a p-type semiconducting material that has been highly researched for its interesting properties. Many small-scale syntheses have exhibited excellent control over size and morphology. As the demand for cuprous oxide grows, the synthesis method needs to evolve to facilitate large-scale production. This paper supplies a facile bulk synthesis method for Cu₂O; on average, a 1-liter reaction volume can produce 1 gram of particles. In order to study the shape and size control mechanisms on such a scale, the reaction volume was diminished to 250 mL, producing on average 0.3 grams of nanoparticles per batch. Well-shaped nanoparticles have been synthesized using an aqueous solution of CuCl₂, NaOH, SDS surfactant, and NH₂OH-HCl at mild temperatures. The time allotted between the addition of NaOH and NH₂OH-HCl was determined to be critical for Cu(OH)₂ production, an important precursor to the final product. The effects of stirring rates on a large scale were also analyzed during reagent addition and post reagent addition. A morphological change from rhombic dodecahedra to spheres occurred as the stirring speed was increased. The effects of NH₂OH-HCl concentration were also studied to control the etching effects of the final product.
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis, and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- and large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
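A common way to "maintain a desired FDR" in shotgun proteomics is target-decoy competition: rank identifications by score and find the loosest cutoff at which decoy matches stay below the desired fraction of target matches. The sketch below illustrates that generic idea only; it is not ProteinInferencer's actual scoring algorithm.

```python
def decoy_fdr_threshold(scored_ids, desired_fdr=0.01):
    """scored_ids: iterable of (score, is_decoy) pairs. Returns the lowest
    score cutoff whose running decoy/target ratio stays <= desired_fdr,
    or None if no cutoff satisfies the bound."""
    best_cut, targets, decoys = None, 0, 0
    for score, is_decoy in sorted(scored_ids, key=lambda t: t[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= desired_fdr:
            best_cut = score  # accepting down to this score keeps FDR in bounds
    return best_cut
```

Everything scoring at or above the returned cutoff is accepted; the decoy count above the cutoff estimates the number of false positives among the accepted targets.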
TPS design for aerobraking at Earth and Mars
NASA Astrophysics Data System (ADS)
Williams, S. D.; Gietzel, M. M.; Rochelle, W. C.; Curry, D. M.
1991-08-01
An investigation was made to determine the feasibility of using an aerobrake system for manned and unmanned missions to Mars, and to Earth from Mars and lunar orbits. A preliminary thermal protection system (TPS) was examined for five unmanned small nose radius, straight bi-conic vehicles and a scaled up Aeroassist Flight Experiment (AFE) vehicle aerocapturing at Mars. Analyses were also conducted for the scaled up AFE and an unmanned Sample Return Cannister (SRC) returning from Mars and aerocapturing into Earth orbit. Also analyzed were three different classes of lunar transfer vehicles (LTV's): an expendable scaled up modified Apollo Command Module (CM), a raked cone (modified AFE), and three large nose radius domed cylinders. The LTV's would be used to transport personnel and supplies between Earth and the moon in order to establish a manned base on the lunar surface. The TPS for all vehicles analyzed is shown to have an advantage over an all-propulsive velocity reduction for orbit insertion. Results indicate that TPS weight penalties of less than 28 percent can be achieved using current material technology, and slightly less for the most favorable LTV using advanced material technology.
Li, Chunlin; Liu, Miao; Hu, Yuanman; Shi, Tuo; Zong, Min; Walter, M Todd
2018-04-17
Urbanization is one of the most widespread anthropogenic activities, which brings a range of physical and biochemical changes to hydrological systems and processes. Increasing direct runoff caused by land use change has become a major challenge for urban ecological security. Reliable prediction of the quantity and rate of surface runoff is an inherently difficult and time-consuming task for large ungauged urban areas. In this study, we combined Geographic Information System and remote sensing technology with an improved Soil Conservation Service curve number model to evaluate the effects of land use change on direct runoff volume of the four-ring area in Shenyang, China, and analyzed trends of direct runoff at different scales. Through analyzing trends of direct runoff from 1984 to 2015 at different scales, we explored how urbanization and other potential factors affect direct runoff changes. Total direct runoff volume increased over time, and trends varied from the inner urban area to the suburban area. Zones 1 and 2 had a tendency toward decreasing direct runoff volume and risks, while Zones 3 and 4 showed gradual increases at both regional and pixel scales. The most important influence on direct runoff change was urban surface change caused by urbanization. This study presents a framework for identifying hotspots of runoff increase, which can provide important guidance to urban managers in future green infrastructure planning, in the hopes of improving the security of urban water ecological patterns.
Li, Chunlin; Liu, Miao; Hu, Yuanman; Shi, Tuo; Zong, Min; Walter, M. Todd
2018-01-01
Urbanization is one of the most widespread anthropogenic activities, which brings a range of physical and biochemical changes to hydrological systems and processes. Increasing direct runoff caused by land use change has become a major challenge for urban ecological security. Reliable prediction of the quantity and rate of surface runoff is an inherently difficult and time-consuming task for large ungauged urban areas. In this study, we combined Geographic Information System and remote sensing technology with an improved Soil Conservation Service curve number model to evaluate the effects of land use change on direct runoff volume of the four-ring area in Shenyang, China, and analyzed trends of direct runoff at different scales. Through analyzing trends of direct runoff from 1984 to 2015 at different scales, we explored how urbanization and other potential factors affect direct runoff changes. Total direct runoff volume increased over time, and trends varied from the inner urban area to the suburban area. Zones 1 and 2 had a tendency toward decreasing direct runoff volume and risks, while Zones 3 and 4 showed gradual increases at both regional and pixel scales. The most important influence on direct runoff change was urban surface change caused by urbanization. This study presents a framework for identifying hotspots of runoff increase, which can provide important guidance to urban managers in future green infrastructure planning, in the hopes of improving the security of urban water ecological patterns. PMID:29673182
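The studies above use an "improved" Soil Conservation Service curve-number model; for reference, the standard textbook form computes direct runoff Q from precipitation P and a curve number CN via the potential retention S. A sketch in metric units (this is the classic equation, not the paper's improved variant):

```python
def scs_runoff(P_mm, CN, lam=0.2):
    """Standard SCS-CN direct runoff (mm):
    S = 25400/CN - 254, Ia = lam*S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
    Ia = lam * S                  # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# Higher CN (more impervious urban surface) -> more direct runoff,
# which is why land-use change drives the runoff trends in the study
print(scs_runoff(100, 98))  # near-impervious surface: most rain runs off
print(scs_runoff(100, 60))  # vegetated soil: much less runoff
```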
The Multi-Scale Network Landscape of Collaboration.
Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong
2016-01-01
Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena--which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive dataset of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists.
The Multi-Scale Network Landscape of Collaboration
Ahn, Yong-Yeol; Park, Juyong
2016-01-01
Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena, which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive dataset of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists. PMID:26990088
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to 'topdog'…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents the computer printout of data on the application of second stage factor analysis of 'underdog'…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents findings from an analysis of data on international cooperation over a three-year period. Computer printout of the analysis is included. Part of a large scale research project to test various theories with regard to their ability to analyze international relations, this monograph reports on the testing of relative status…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of combined…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis of 'underdog'…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international cooperation over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of discriminant function analysis to combined…
ERIC Educational Resources Information Center
Vincent, Jack E.
This monograph presents the computer printout of an analysis of data on international conflict over a three-year period. Part of a large scale research project to test various theories with regard to their power in analyzing international relations, this monograph presents data on the application of second stage factor analysis of combined…
Kevin M. Potter; Jeanine L. Paschke
2013-01-01
Analyzing patterns of forest pest infestations, disease occurrences, forest declines, and related biotic stress factors is necessary to monitor the health of forested ecosystems and their potential impacts on forest structure, composition, biodiversity, and species distributions (Castello and others 1995). Introduced nonnative insects and diseases, in particular, can...
Kevin M. Potter
2013-01-01
Analyzing patterns of forest pest infestation, disease occurrences, forest declines, and related biotic stress factors is necessary to monitor the health of forested ecosystems and their potential impacts on forest structure, composition, biodiversity, and species distributions (Castello and others 1995). Introduced nonnative insects and diseases, in particular, can...
Descriptor Fingerprints and Their Application to White Wine Clustering and Discrimination.
NASA Astrophysics Data System (ADS)
Bangov, I. P.; Moskovkina, M.; Stojanov, B. P.
2018-03-01
This study continues the effort to apply statistical processing to large-scale analytical data. A group of 3,898 white wines, each with 11 analytical laboratory benchmarks, was analyzed by a fingerprint similarity search in order to group them into separate clusters. The quality of the wines in each individual cluster was then characterized according to the individual laboratory parameters.
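A fingerprint similarity search over a handful of numeric laboratory benchmarks can be sketched as cosine similarity between z-scored descriptor vectors, with pairs above a chosen threshold falling into the same cluster. The function below illustrates that general idea only; it is not the authors' exact fingerprint definition.

```python
import numpy as np

def similarity_matrix(X):
    """Cosine similarity between rows of X after z-scoring each descriptor
    column, so no single benchmark's units dominate the comparison."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return U @ U.T

# Three mock samples with two descriptors each; rows 0 and 2 are near-twins
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.1, 2.1]])
S = similarity_matrix(X)
```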
Ten Years of Analyzing the Duck Chart: How an NREL Discovery in 2008 Is
The work examined how to plan for future large-scale integration of solar photovoltaic (PV) generation on the grid. As PV was deployed more widely, system operators became increasingly concerned about how solar generation would interact with emerging energy and environmental policy initiatives pushing for higher levels of solar PV deployment.
Changes in downed wood and forest structure after prescribed fire in ponderosa pine forests
Victoria Saab; Lisa Bate; John Lehmkuhl; Brett Dickson; Scott Story; Stephanie Jentsch; William Block
2006-01-01
Most prescribed fire plans focus on reducing wildfire hazards with little consideration given to effects on wildlife populations and their habitats. To evaluate effectiveness of prescribed burning in reducing fuels and to assess effects of fuels reduction on wildlife, we began a large-scale study known as the Birds and Burns Network in 2002. In this paper we analyze...
Coarse woody type: A new method for analyzing coarse woody debris and forest change
C. W. Woodall; L. M. Nagel
2006-01-01
The species composition of both standing live and down dead trees has been used separately to determine forest stand dynamics in large-scale forest ecosystem assessments. The species composition of standing live trees has been used to indicate forest stand diversity while the species composition of down dead trees has been used to indicate wildlife habitat. To assess...
ERIC Educational Resources Information Center
Baumann, Paul R., Ed.
This teaching guide offers educators glimpses into the value of remote sensing, the process of observing and analyzing the earth from a distance. Remote sensing provides information in forms that reveal spatial patterns over large areas more realistically than thematic maps and allows a macro-scale look at global problems. The six instructional…
ERIC Educational Resources Information Center
Jenkins, Davis
This paper analyzes the role the community college plays as a bridge to opportunity for the working poor and economically disadvantaged. Because educating the disadvantaged is expensive and often under-funded--particularly in the area of basic or remedial education--many community colleges opt to focus on educating more advantaged students in…
ERIC Educational Resources Information Center
Benediktsson, Michael Owen
2010-01-01
What role do the media play in the identification and construction of white-collar crimes? Few studies have examined media coverage of corporate deviance. This study investigates news coverage of six large-scale accounting scandals that broke in 2001 and 2002. Using a variety of empirical methods to analyze the 51 largest U.S. newspapers, the…
Probing large-scale magnetism with the cosmic microwave background
NASA Astrophysics Data System (ADS)
Giovannini, Massimo
2018-04-01
Prior to photon decoupling, magnetic random fields of comoving intensity in the nano-Gauss range distort the temperature and the polarization anisotropies of the microwave background, potentially induce a peculiar B-mode power spectrum and may even generate a frequency-dependent circularly polarized V-mode. We critically analyze the theoretical foundations and the recent achievements of an interesting trialogue involving plasma physics, general relativity and astrophysics.
Brian S. Hughett; Wayne K. Clatterbuck
2014-01-01
Differences in composition, structure, and growth under canopy gaps created by the mortality of a single stem were analyzed using analysis of variance under two scenarios, with stem removed or with stem left as a standing snag. There were no significant differences in composition and structure of large diameter residual stems within upper canopy strata. Some...
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
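Information-Aware Partitioning, as described above, scores candidate cut points by the entropy of the data that would cross them, since low-entropy streams compress well and are cheap to send between chips. A minimal sketch of the entropy estimate itself, not the thesis's partitioning algorithm:

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Empirical Shannon entropy in bits/symbol of an observed data stream;
    lower values mean the inter-chip traffic compresses better."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A skewed stream compresses better than a uniform one over the same alphabet
print(shannon_entropy("aaaabbcd"))  # → 1.75
print(shannon_entropy("abcdabcd"))  # → 2.0
```

In a partitioner, each candidate cut between chips would be weighted by the entropy of the signals crossing it, and cuts with the lowest total weighted traffic would be preferred.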
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. 
In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
Large-Scale Ichthyoplankton and Water Mass Distribution along the South Brazil Shelf
de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique
2014-01-01
Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27′ and 34°51′S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients. PMID:24614798
High Fidelity Simulations of Large-Scale Wireless Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onunkwo, Uzoma; Benz, Zachary
The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively long turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES scalability (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute the computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
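At its core, the discrete event simulation paradigm the abstract refers to is a time-ordered event queue. The toy loop below shows that core with Python's heapq; the event names are hypothetical, and real wireless simulators such as ns-3 add channel models, mobility, and the PDES synchronization machinery discussed above.

```python
# A toy discrete event simulation (DES) core: a priority queue of timestamped
# events processed strictly in time order.
import heapq

def run(events):
    """Pop (time, name) events in timestamp order and return the trace."""
    heapq.heapify(events)
    trace = []
    while events:
        t, name = heapq.heappop(events)
        trace.append((t, name))  # a real simulator would also schedule
                                 # follow-on events here
    return trace

trace = run([(2.0, "rx"), (0.5, "tx"), (1.2, "beacon")])
print(trace)  # [(0.5, 'tx'), (1.2, 'beacon'), (2.0, 'rx')]
```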
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single or multiple unimodular waveforms (codes) with good auto- and cross-correlation or weighted-correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by means of the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form that allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
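For reference, the ISL objective that such algorithms minimize can be evaluated directly from the aperiodic autocorrelation. The sketch below uses one common one-sided convention, ISL = Σ_{k=1}^{N-1} |r(k)|²; it only evaluates the objective for a short binary-phase code and is not the authors' MM solver.

```python
# Integrated sidelobe level (ISL) of a unimodular code, evaluated from the
# aperiodic autocorrelation r(k) = sum_n x[n] * conj(x[n+k]).

def autocorr(x, k):
    """Aperiodic autocorrelation of x at lag k (x may be complex)."""
    return sum(x[n] * x[n + k].conjugate() for n in range(len(x) - k))

def isl(x):
    """One-sided ISL: sum of squared sidelobe magnitudes over lags 1..N-1."""
    return sum(abs(autocorr(x, k)) ** 2 for k in range(1, len(x)))

code = [1 + 0j, 1 + 0j, 1 + 0j, -1 + 0j]  # a unimodular (binary-phase) code
print(isl(code))  # 2.0
```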
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Jing; Xianyu, Zhong-Zhi; He, Hong-Jian, E-mail: jingren2004@gmail.com, E-mail: xianyuzhongzhi@gmail.com, E-mail: hjhe@tsinghua.edu.cn
2014-06-01
We study the gravitational interaction of the Higgs boson through the unique dimension-4 operator ξH†HR, with H the Higgs doublet and R the Ricci scalar curvature. We analyze the effect of this dimensionless nonminimal coupling ξ on weak gauge boson scattering in both the Jordan and Einstein frames. We explicitly establish the longitudinal-Goldstone equivalence theorem with nonzero ξ coupling in both frames, and analyze the unitarity constraints. We study the ξ-induced weak boson scattering cross sections at O(1-30) TeV scales, and propose to probe the Higgs-gravity coupling via weak boson scattering experiments at the LHC (14 TeV) and next-generation pp colliders (50-100 TeV). We further extend our study to Higgs inflation, and quantitatively derive the perturbative unitarity bounds via coupled-channel analysis, under a large field background at the inflation scale. We analyze the unitarity constraints on the parameter space in both the conventional Higgs inflation model and the improved models in light of the recent BICEP2 data.
Statistical physics of community ecology: a cavity solution to MacArthur’s consumer resource model
NASA Astrophysics Data System (ADS)
Advani, Madhu; Bunin, Guy; Mehta, Pankaj
2018-03-01
A central question in ecology is to understand the ecological processes that shape community structure. Niche-based theories have emphasized the important role played by competition in maintaining species diversity. Many of these insights have been derived using MacArthur's consumer resource model (MCRM) or its generalizations. Most theoretical work on the MCRM has focused on small ecosystems with a few species and resources. However, theoretical insights derived from small ecosystems may not scale up to large ecosystems with many resources and species, because large systems with many interacting components often display new emergent behaviors that cannot be understood or deduced by analyzing smaller systems. To address these shortcomings, we develop a statistical-physics-inspired cavity method to analyze the MCRM when both the number of species and the number of resources are large. Unlike previous work in this limit, our theory addresses resource dynamics and resource depletion and demonstrates that species generically and consistently perturb their environments and significantly modify the available ecological niches. We show how our cavity approach naturally generalizes niche theory to large ecosystems by accounting for the effect of collective phenomena on species invasion and ecological stability. Our theory suggests that such phenomena are a generic feature of large, natural ecosystems and must be taken into account when analyzing and interpreting community structure. It also highlights the important role that statistical-physics-inspired approaches can play in furthering our understanding of ecology.
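The MCRM dynamics referred to above couple consumer growth to resource availability and resource depletion to consumption. The toy Euler integration below uses one species and one resource with illustrative parameter values only; the cavity analysis in the abstract concerns the many-species, many-resource limit that this minimal sketch does not attempt to reach.

```python
# Toy Euler integration of MacArthur's consumer resource model (MCRM) with
# one species (abundance N) and one self-renewing resource (abundance R).

def step(N, R, c=1.0, m=0.5, K=1.0, dt=1e-3):
    dN = N * (c * R - m)           # consumer grows by feeding on the resource
    dR = R * (K - R) - N * c * R   # logistic resource, depleted by consumption
    return N + dt * dN, R + dt * dR

N, R = 0.1, 1.0
for _ in range(200_000):           # integrate to t = 200
    N, R = step(N, R)
print(round(N, 3), round(R, 3))    # converges to the fixed point N* = 0.5, R* = m/c = 0.5
```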
Inflationary magnetogenesis without the strong coupling problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, Ricardo J.Z.; Jain, Rajeev Kumar; Sloth, Martin S., E-mail: ferreira@cp3.dias.sdu.dk, E-mail: jain@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk
2013-10-01
The simplest gauge-invariant models of inflationary magnetogenesis are known to suffer from the problems of either large backreaction or strong coupling, which make it difficult to self-consistently achieve cosmic magnetic fields from inflation with a field strength larger than 10⁻³² G today on the Mpc scale. Such a strength is insufficient to act as a seed for the galactic dynamo effect, which requires a magnetic field larger than 10⁻²⁰ G. In this paper we analyze simple extensions of the minimal model, which avoid both the strong coupling and backreaction problems, in order to generate sufficiently large magnetic fields on the Mpc scale today. First we study the possibility that the coupling function which breaks the conformal invariance of electromagnetism is non-monotonic with sharp features. Subsequently, we consider the effect of lowering the energy scale of inflation jointly with a scenario of prolonged reheating in which the universe is dominated by a stiff fluid for a short period after inflation. In the latter case, a systematic study shows upper bounds for the magnetic field strength today on the Mpc scale of 10⁻¹³ G for low-scale inflation and 10⁻²⁵ G for high-scale inflation, thus improving on the previous result by 7-19 orders of magnitude. These results are consistent with the strong coupling and backreaction constraints.
Cloud Microphysics Budget in the Tropical Deep Convective Regime
NASA Technical Reports Server (NTRS)
Li, Xiao-Fan; Sui, C.-H.; Lau, K.-M.; Einaudi, Franco (Technical Monitor)
2001-01-01
Cloud microphysics budgets in the tropical deep convective regime are analyzed based on a 2-D cloud-resolving simulation. The model is forced by the large-scale vertical velocity, zonal wind, and large-scale horizontal advections derived from TOGA COARE for a 20-day period. The role of cloud microphysics is first examined by analyzing the mass-weighted mean heat budget and the column-integrated moisture budget. Hourly budgets show that local changes of mass-weighted mean temperature and column-integrated moisture are mainly determined by the residuals between vertical thermal advection and the latent heat of condensation, and between vertical moisture advection and condensation, respectively. Thus, atmospheric thermodynamics depends on how cloud microphysical processes are parameterized. Cloud microphysics budgets are then analyzed for raining conditions. For cloud-vapor exchange between the cloud system and its embedded environment, rainfall and the evaporation of raindrops are compensated by the condensation and deposition of supersaturated vapor. Inside the cloud system, the condensation of supersaturated vapor balances the conversion from cloud water to raindrops, snow, and graupel through collection and accretion processes. The deposition of supersaturated vapor balances the conversion from cloud ice to snow through conversion and riming processes. The conversion and riming of cloud ice and the accretion of cloud water balance the conversion from snow to graupel through the accretion process. Finally, the collection of cloud water and the melting of graupel increase raindrops to compensate for the loss of raindrops due to rainfall and evaporation.
Posttest analysis of a 1:6-scale reinforced concrete reactor containment building
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weatherby, J.R.
In an experiment conducted at Sandia National Laboratories, a 1:6-scale model of a reinforced concrete light water reactor containment building was pressurized with nitrogen gas to more than three times its design pressure. The pressurization produced one large tear and several smaller tears in the steel liner plate that functioned as the primary pneumatic seal for the structure. The data collected from the overpressurization test have been used to evaluate and further refine methods of structural analysis that can be used to predict the performance of containment buildings under conditions produced by a severe accident. This report describes posttest finite element analyses of the 1:6-scale model tests and compares pretest predictions of the structural response to the experimental results. Strains and displacements calculated in axisymmetric finite element analyses of the 1:6-scale model are compared to those measured in the experiment. Detailed analyses of the liner plate are also described in the report. The region of the liner surrounding the large tear was analyzed using two different two-dimensional finite element models. The results from these analyses indicate that the primary mechanisms that initiated the tear can be captured in a two-dimensional finite element model. Furthermore, the analyses show that the studs used to anchor the liner to the concrete wall played an important role in initiating the liner tear. Three-dimensional finite element analyses of liner plates loaded by studs are also presented. Results from the three-dimensional analyses are compared to results from two-dimensional analyses of the same problems. 12 refs., 56 figs., 1 tab.
Energy transfer in turbulence under rotation
NASA Astrophysics Data System (ADS)
Buzzicotti, Michele; Aluie, Hussein; Biferale, Luca; Linkmann, Moritz
2018-03-01
It is known that rapidly rotating turbulent flows are characterized by the emergence of simultaneous upscale and downscale energy transfer. Indeed, both numerics and experiments show the formation of large-scale anisotropic vortices together with the development of small-scale dissipative structures. However, the organization of interactions leading to this complex dynamics remains unclear. Two different mechanisms are known to be able to transfer energy upscale in a turbulent flow. The first is characterized by two-dimensional interactions among triads lying on the two-dimensional, three-component (2D3C)/slow manifold, namely on the Fourier plane perpendicular to the rotation axis. The second mechanism is three-dimensional and consists of interactions between triads with the same sign of helicity (homochiral). Here, we present a detailed numerical study of rotating flows using a suite of high-Reynolds-number direct numerical simulations (DNS) within different parameter regimes to analyze both the upscale and downscale cascade ranges. We find that the upscale cascade at wave numbers close to the forcing scale is generated by increasingly dominant homochiral interactions which couple the three-dimensional bulk and the 2D3C plane. This coupling produces an accumulation of energy in the 2D3C plane, which then transfers energy to smaller wave numbers thanks to the two-dimensional mechanism. In the forward cascade range, we find that the energy transfer is dominated by heterochiral triads and occurs primarily through interactions within the fast manifold, where k_z ≠ 0. We further analyze the energy transfer in different regions of the real-space domain. In particular, we distinguish high-strain from high-vorticity regions, and we uncover that while the mean transfer is produced inside regions of strain, the rare but extreme events of energy transfer occur primarily inside the large-scale column vortices.
Submarine landslide identified in DLW3102 core of the northern continental slope, South China Sea
NASA Astrophysics Data System (ADS)
Xu, Yuanqin; Liu, Lejun; Zhou, Hang; Huang, Baoqi; Li, Ping; Ma, Xiudong; Dong, Feiyin
2018-02-01
In this paper, we take the DLW3101 core, obtained at the top of the canyon (no landslide area), and the DLW3102 core, obtained at the bottom of the canyon (landslide area), on the northern continental slope of the South China Sea as research objects. The chronostratigraphic framework of the DLW3101 core and the elemental stratigraphy of the DLW3101 and DLW3102 cores since MIS5 are established by analyzing oxygen isotopes, calcium carbonate content, and X-ray fluorescence (XRF) scanning elements. On the basis of the information obtained by analyzing the sedimentary structures and chemical elements in the landslide deposits, we found that the DLW3102 core shows four layers of submarine landslides; each landslide layer is characterized by high Si, K, Ti, and Fe contents, indicating terrigenous clastic sources. L1 (2.15-2.44 m) occurred in MIS2 and is a slump sedimentary layer with a small sliding distance and scale. L2 (15.48-16.00 m) occurred in MIS5 and is a debris-flow deposit with a scale and sliding distance greater than those of L1. L3 (19.00-20.90 m) occurred in MIS5; its upper part (19.00-20.00 m) is a debris-flow deposit, and its lower part (20.00-20.90 m) is a sliding deposition layer. The landslide scale of L3 is large. L4 (22.93-24.27 m) occurred in MIS5; its upper part (22.93-23.50 m) is a turbidite layer, and its lower part (23.50-24.27 m) is a slump sedimentary layer. The landslide scale of L4 is large.
Cellular scaling rules for the brain of Artiodactyla include a highly folded cortex with few neurons
Kazu, Rodrigo S.; Maldonado, José; Mota, Bruno; Manger, Paul R.; Herculano-Houzel, Suzana
2014-01-01
Quantitative analysis of the cellular composition of rodent, primate, insectivore, and afrotherian brains has shown that non-neuronal scaling rules are similar across these mammalian orders that diverged about 95 million years ago, and therefore appear to be conserved in evolution, while neuronal scaling rules appear to be free to vary in a clade-specific manner. Here we analyze the cellular scaling rules that apply to the brain of artiodactyls, a group within the order Cetartiodactyla, believed to be a relatively recent radiation from the common Eutherian ancestor. We find that artiodactyls share non-neuronal scaling rules with all groups analyzed previously. Artiodactyls share with afrotherians and rodents, but not with primates, the neuronal scaling rules that apply to the cerebral cortex and cerebellum. The neuronal scaling rules that apply to the remaining brain areas are, however, distinct in artiodactyls. Importantly, we show that the folding index of the cerebral cortex scales with the number of neurons in the cerebral cortex in distinct fashions across artiodactyls, afrotherians, rodents, and primates, such that the artiodactyl cerebral cortex is more convoluted than primate cortices of similar numbers of neurons. Our findings suggest that the scaling rules found to be shared across modern afrotherians, glires, and artiodactyls applied to the common Eutherian ancestor, such as the relationship between the mass of the cerebral cortex as a whole and its number of neurons. In turn, the distribution of neurons along the surface of the cerebral cortex, which is related to its degree of gyrification, appears to be a clade-specific characteristic. If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex. PMID:25429261
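Scaling rules like those analyzed above are power laws, M = a·N^b, and the exponent b is recovered as the slope of a least-squares line in log-log space. The data below are synthetic with an arbitrarily chosen exponent; nothing here reproduces the paper's measurements.

```python
# Recover a power-law exponent by ordinary least squares in log-log space.
import math

def loglog_slope(xs, ys):
    """Slope of the least-squares line through (log x, log y) pairs."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

ns = [1e6, 1e7, 1e8, 1e9]                # synthetic neuron counts
ms = [2.0 * n ** 1.6 for n in ns]        # exact power law, exponent 1.6
print(round(loglog_slope(ns, ms), 3))    # 1.6
```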
Shaw, Jared B; Gorshkov, Mikhail V; Wu, Qinghao; Paša-Tolić, Ljiljana
2018-05-01
Mass spectrometric characterization of large biomolecules, such as intact proteins, requires the specificity afforded by ultrahigh-resolution mass measurements performed at both the intact-mass and product-ion levels. Although the performance of time-of-flight mass analyzers is steadily increasing, the choice of mass analyzer for large biomolecules (e.g., proteins >50 kDa) is generally limited to the Fourier transform family of mass analyzers, such as the Orbitrap and ion cyclotron resonance (FTICR-MS) instruments, with the latter providing unmatched mass resolving power and measurement accuracy. Yet protein analyses using FTMS are largely hindered by the low acquisition rates of spectra with ultrahigh resolving power. Frequency multiple detection schemes enable FTICR-MS to overcome this fundamental barrier and achieve resolving powers and acquisition speeds 4× greater than the limits imposed by magnetic field strength. Here we expand upon earlier work on the implementation of this technique for biomolecular characterization. We report the coupling of 21 T FTICR-MS, 4× frequency multiplication, ion trapping field harmonization technology, and spectral data processing methods to achieve unprecedented acquisition rates and resolving power in mass spectrometry of large intact proteins. Isotopically resolved spectra of multiply charged ubiquitin ions were acquired using detection periods as short as 12 ms. Large proteins such as apo-transferrin (MW = 78 kDa) and a monoclonal antibody (MW = 150 kDa) were isotopically resolved with detection periods of 384 and 768 ms, respectively. These results illustrate the future capability of accurate characterization of large proteins on time scales compatible with online separations.
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
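The flavor of a homogenized slow-scale coefficient can be seen in the classical pure-diffusion analogue (no advection): for one-dimensional diffusion through a rapidly varying periodic medium, homogenization yields the harmonic mean of D(x), which always lies below the arithmetic mean. This is the textbook case, not the paper's advection-dependent formula, and the two-phase medium below is made up for illustration.

```python
# Harmonic vs arithmetic mean of a rapidly alternating diffusion coefficient:
# in classical 1D homogenization of u_t = (D(x) u_x)_x with periodic D, the
# slow-scale coefficient is the harmonic mean.

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

D = [1.0, 0.25] * 50          # fast-scale alternation between two media
arith = sum(D) / len(D)
harm = harmonic_mean(D)
print(arith)  # 0.625
print(harm)   # 0.4
```

The gap between the two values illustrates why naively averaging fast-scale coefficients overestimates slow-scale transport.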
Evolution of IPv6 Internet topology with unusual sudden changes
NASA Astrophysics Data System (ADS)
Ai, Jun; Zhao, Hai; Kathleen, M. Carley; Su, Zhan; Li, Hui
2013-07-01
The evolution of Internet topology is not always smooth; it sometimes undergoes unusual sudden changes. Consequently, identifying patterns of unusual topology evolution is critical for Internet topology modeling and simulation. We analyze IPv6 Internet topology evolution in an IP-level graph to demonstrate how it changes in uncommon ways to restructure the Internet. After evaluating the changes of average degree, average path length, and several other metrics over time, we find that in the case of large-scale growth the Internet becomes more robust, whereas in a top-bottom connection enhancement the Internet maintains its efficiency with links largely decreased.
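The two metrics named above, average degree and average path length, are straightforward to compute on any undirected graph. The sketch below uses a toy four-node path graph rather than the IPv6 topology, and plain breadth-first search instead of a graph library.

```python
# Average degree and average shortest-path length of an undirected graph,
# computed with breadth-first search from every node.
from collections import deque

def avg_degree(adj):
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def avg_path_length(adj):
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node path graph
print(avg_degree(path4))       # 1.5
print(avg_path_length(path4))  # ~1.667
```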
Rosenberg, D; Marino, R; Herbert, C; Pouquet, A
2016-01-01
We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, over a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024³ points, with no modeling of the small scales, no forcing, and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number, defined as R_B = Re Fr², varies from negligible values to ≈10⁵, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau, on which the time scales become comparable (γ ≈ 0.6), when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards isotropization of the flow.
Benefit-Cost Analysis of Foot-and-Mouth Disease Vaccination at the Farm-Level in South Vietnam.
Truong, Dinh Bao; Goutard, Flavie Luce; Bertagnoli, Stéphane; Delabouglise, Alexis; Grosbois, Vladimir; Peyre, Marisa
2018-01-01
This study aimed to analyze the financial impact of foot-and-mouth disease (FMD) outbreaks in cattle at the farm level and the benefit-cost ratio (BCR) of a biannual vaccination strategy to prevent and eradicate FMD in cattle in South Vietnam. Production data were collected from 49 small-scale dairy farms, 15 large-scale dairy farms, and 249 beef farms in Long An and Tay Ninh provinces using a questionnaire. Financial data on FMD impacts were collected using participatory tools in 37 villages of Long An province. The net present value of FMD vaccination, i.e., the difference between the benefits (additional revenue and saved costs) and the costs (additional costs and revenue foregone), in large-scale dairy farms was 2.8 times higher than in small-scale dairy farms and 20 times higher than in beef farms. The BCRs of FMD vaccination over 1 year in large-scale dairy farms, small-scale dairy farms, and beef farms were 11.6 [95% confidence interval (95% CI) 6.42-16.45], 9.93 (95% CI 3.45-16.47), and 3.02 (95% CI 0.76-7.19), respectively. The sensitivity analysis showed that varying the vaccination cost had more effect on the BCR of cattle vaccination than varying the market price. This benefit-cost analysis of a biannual vaccination strategy showed that investment in FMD prevention can be financially profitable, and therefore sustainable, for dairy farmers. For beef cattle, it is less certain that vaccination is profitable. An additional benefit-cost analysis of vaccination strategies at the national level would be required to evaluate and adapt the national strategy to achieve eradication of this disease in Vietnam.
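The two quantities reported above reduce to simple arithmetic once the benefit and cost streams are tallied. The numbers below are hypothetical placeholders, not the Vietnamese farm data, and a full analysis would also discount multi-year streams.

```python
# Benefit-cost ratio and single-period net value, as defined in the abstract:
# benefits are additional revenue plus saved costs; costs are additional
# costs plus revenue foregone. Values are hypothetical.

def bcr(benefits, costs):
    """Benefit-cost ratio: total benefits divided by total costs."""
    return sum(benefits) / sum(costs)

def net_value(benefits, costs):
    """Benefits minus costs over one period (no discounting)."""
    return sum(benefits) - sum(costs)

benefits = [1200.0, 300.0]        # additional revenue, saved outbreak costs
costs = [100.0, 50.0]             # vaccine doses, labour
print(bcr(benefits, costs))       # 10.0
print(net_value(benefits, costs)) # 1350.0
```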
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background: As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Methods: Bionimbus is an open-source cloud-computing platform based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, a high-performance clustered file system. Bionimbus also includes Tukey, a portal and associated middleware that provides a single entry point and single sign-on for the various Bionimbus resources, and Yates, which automates the installation, configuration, and maintenance of the required software infrastructure. Results: Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB per sample. Conclusions: Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and compute resources to manage and analyze the data. Cloud-computing platforms such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
Iino, Ryota; Matsumoto, Yoshimi; Nishino, Kunihiko; Yamaguchi, Akihito; Noji, Hiroyuki
2013-01-01
Single-cell analysis is a powerful method to assess heterogeneity among individual cells, enabling the identification of very rare cells with properties that differ from those of the majority. In this Methods Article, we describe the use of a large-scale femtoliter droplet array to enclose, isolate, and analyze individual bacterial cells. As a first example, we describe the single-cell detection of drug-tolerant persisters of Pseudomonas aeruginosa treated with the antibiotic carbenicillin. As a second example, this method was applied to the single-cell evaluation of drug efflux activity, which causes acquired antibiotic resistance in bacteria. The MexAB-OprM multidrug efflux pump system from Pseudomonas aeruginosa was expressed in Escherichia coli, and the effect of the inhibitor D13-9001 was assessed at the single-cell level.
NASA Technical Reports Server (NTRS)
Gaskell, R. W.; Synnott, S. P.
1987-01-01
To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.
NASA Technical Reports Server (NTRS)
Hickey, David H.; Aoyagi, Kiyoshi
1960-01-01
A wind-tunnel investigation was conducted to determine the effect of trailing-edge flaps with blowing-type boundary-layer control and leading-edge slats on the low-speed performance of a large-scale jet transport model with four engines and a 35 deg. sweptback wing of aspect ratio 7. Two spanwise extents and several deflections of the trailing-edge flap were tested. Results were obtained with a normal leading-edge and with full-span leading-edge slats. Three-component longitudinal force and moment data and boundary-layer-control flow requirements are presented. The test results are analyzed in terms of possible improvements in low-speed performance. The effect on performance of the source of boundary-layer-control air flow is considered in the analysis.
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks, where inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived and then solved with the GMRES algorithm using rescaling, preconditioning, and recycling techniques. With the merits of a sparsified reluctance matrix and iterative solution of the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.
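The frequency-domain pipeline the abstract describes — assemble the circuit equation at sample frequencies, solve each system, and recover the time-domain response — can be sketched on a toy two-node segment. Everything below is a hypothetical illustration: a direct 2×2 complex solve stands in for the paper's preconditioned GMRES iteration, reluctance terms are omitted, and the element values are made up.

```python
def solve2(A, b):
    # Direct 2x2 complex solve (Cramer's rule); a stand-in for the
    # preconditioned GMRES iteration needed at realistic network sizes.
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return ((b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det)

def freq_response(G, C, i_inj, omegas):
    # Solve (G + j*omega*C) v = i at each sample frequency, the
    # basic frequency-domain nodal equation of a P/G network.
    sols = []
    for w in omegas:
        A = [[G[r][c] + 1j * w * C[r][c] for c in range(2)]
             for r in range(2)]
        sols.append(solve2(A, i_inj))
    return sols

# Hypothetical 2-node P/G segment: conductances [S], capacitances [F].
G = [[2.0, -1.0], [-1.0, 1.5]]
C = [[1e-9, 0.0], [0.0, 1e-9]]
i_inj = [1.0, 0.0]  # 1 A drawn at node 0
v = freq_response(G, C, i_inj, [0.0, 1e9])
```

At DC the solution reduces to the purely resistive divider, while at high frequency the capacitive admittance lowers the node-0 voltage magnitude.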
NASA Astrophysics Data System (ADS)
Neklyudov, A. A.; Savenkov, V. N.; Sergeyez, A. G.
1984-06-01
Memories are improved by increasing the speed or the memory volume on a single chip. The most effective means for increasing speed in bipolar memories are current-control circuits with the lowest extraction times for a specific power consumption (1/4 pJ/bit). The current-control circuitry involves multistage current switches and circuits that accelerate transient processes in storage elements and links. Circuit principles for the design of bipolar memories with maximum speed for an assigned minimum of circuit topology are analyzed. Two main classes of storage with current control are considered: ECL-type and super-integrated injection-type storage, with data capacities of N = 1/4 and N 4/16, respectively. The circuits reduce logic voltage differentials and the sizes of the word, bit, and control buses. The limiting speed is determined by the anti-interference requirements of the memory in storage and extraction modes.
Distribution of Usutu Virus in Germany and Its Effect on Breeding Bird Populations
Jöst, Hanna; Cadar, Daniel; Thomas, Stephanie Margarete; Bosch, Stefan; Tannich, Egbert; Becker, Norbert; Ziegler, Ute; Lachmann, Lars; Schmidt-Chanasit, Jonas
2017-01-01
Usutu virus (USUV) is an emerging mosquitoborne flavivirus with an increasing number of reports from several countries in Europe, where USUV infection has caused high avian mortality rates. However, 20 years after the first observed outbreak of USUV in Europe, there is still no reliable assessment of the large-scale impact of USUV outbreaks on bird populations. In this study, we identified the areas suitable for USUV circulation in Germany and analyzed the effects of USUV on breeding bird populations. We calculated the USUV-associated additional decline of common blackbird (Turdus merula) populations as 15.7% inside USUV-suitable areas but found no significant effect for the other 14 common bird species investigated. Our results show that the emergence of USUV is a further threat for birds in Europe and that the large-scale impact on population levels, at least for common blackbirds, must be considered. PMID:29148399
Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B
2013-03-23
Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: management of large-scale, complex data sets; MS peak identification and indexing; and high-dimensional differential analysis of peaks with control of the false discovery rate (FDR) across concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution that provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The web application supports online uploading and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
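One standard way to control the FDR across many concurrent peak tests, as the pipeline above requires, is the Benjamini-Hochberg step-up procedure. The sketch below is a minimal generic implementation, not code from the described web portal:

```python
def benjamini_hochberg(pvals, q=0.05):
    # Benjamini-Hochberg step-up procedure: sort p-values, find the
    # largest rank k with p_(k) <= q*k/m, and declare the k smallest
    # p-values significant. Returns the original indices of the
    # significant tests (e.g., MS peaks).
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= q * rank / m:
            k_max = rank
    return sorted(order[:k_max])
```

For example, with p-values [0.01, 0.02, 0.9, 1.0] at q = 0.05, the first two tests survive while the last two do not.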
NASA Technical Reports Server (NTRS)
Stoll, F.; Koenig, D. G.
1983-01-01
Data obtained through very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which, effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.
Big Data Analytics for Genomic Medicine
He, Karen Y.; Ge, Dongliang; He, Max M.
2017-01-01
Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining large-scale various data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287
Analysis of detection performance of multi band laser beam analyzer
NASA Astrophysics Data System (ADS)
Du, Baolin; Chen, Xiaomei; Hu, Leili
2017-10-01
Compared with microwave radar, laser radar has higher resolution, stronger anti-interference ability, and better concealment, so it has become a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation shows that the echo power is greater than the detector's response power. Second, a large-scale LRCS measurement system is designed according to the demonstration and estimation. The system mainly consists of a laser shaping device, a beam emitting device, a laser echo receiving device, and an integrated control device. Finally, using the designed LRCS measurement system, the scattering cross section of the target is simulated and tested. The simulation results agree closely with the test results, demonstrating the correctness of the system.
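The echo-power estimate used to check the detector's response can be sketched with a textbook monostatic laser-radar link budget, in which received power from a point target of cross section sigma falls off as 1/R^4. The functional form and all parameter values below are illustrative assumptions, not the system's actual design figures.

```python
import math

def echo_power(p_t, sigma, a_r, r, eta=1.0, t_atm=1.0):
    # Monostatic laser-radar link budget (textbook point-target form):
    #   p_t   transmitted power [W]
    #   sigma laser radar cross section [m^2]
    #   a_r   receiver aperture area [m^2]
    #   r     range to target [m]
    #   eta   optics efficiency, t_atm one-way atmospheric transmittance
    # Received power scales as 1/r^4 (spreading loss out and back).
    return p_t * sigma * a_r * eta * t_atm ** 2 / ((4 * math.pi) ** 2 * r ** 4)

# Hypothetical check: is the echo above a detector response threshold?
p_rx = echo_power(p_t=10.0, sigma=1.0, a_r=0.01, r=1000.0)
detectable = p_rx > 1e-15  # hypothetical detector response power [W]
```

Doubling the range cuts the echo power by a factor of 16, which is why the boundary-condition estimate must precede the system design.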
NASA Astrophysics Data System (ADS)
Desai, Darshak A.; Kotadiya, Parth; Makwana, Nikheel; Patel, Sonalinkumar
2015-03-01
Indian industries need overall operational excellence for sustainable profitability and growth in the present age of global competitiveness. Among different quality and productivity improvement techniques, Six Sigma has emerged as one of the most effective breakthrough improvement strategies. Though Indian industries are exploring this improvement methodology to their advantage and reaping the benefits, not much has been presented and published regarding the experience of Six Sigma in the food-processing industries. This paper is an effort to exemplify the application of a Six Sigma quality improvement drive to one of the large-scale food-processing sectors in India. The paper discusses the phase-wise implementation of define, measure, analyze, improve, and control (DMAIC) on one of the chronic problems, variation in the weight of milk powder pouches. The paper wraps up with the improvements achieved and the projected bottom-line gain to the unit from the application of the Six Sigma methodology.
Dynamical tuning for MPC using population games: A water supply network application.
Barreiro-Gomez, Julian; Ocampo-Martinez, Carlos; Quijano, Nicanor
2017-07-01
Model predictive control (MPC) is a suitable strategy for the control of large-scale systems that have multiple design requirements, e.g., multiple physical and operational constraints. Besides, an MPC controller is able to deal with multiple control objectives considering them within the cost function, which implies to determine a proper prioritization for each of the objectives. Furthermore, when the system has time-varying parameters and/or disturbances, the appropriate prioritization might vary along the time as well. This situation leads to the need of a dynamical tuning methodology. This paper addresses the dynamical tuning issue by using evolutionary game theory. The advantages of the proposed method are highlighted and tested over a large-scale water supply network with periodic time-varying disturbances. Finally, results are analyzed with respect to a multi-objective MPC controller that uses static tuning. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
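The evolutionary-game tuning idea above can be caricatured with a discrete-time replicator update that shifts the MPC objective weights toward whichever cost terms currently perform worst. The three objectives, the constant "fitness" values, and the specific update rule are illustrative assumptions, not the authors' formulation.

```python
def replicator_step(weights, fitness):
    # Discrete-time replicator dynamics: each weight grows in
    # proportion to its fitness, and the state stays on the simplex
    # (non-negative weights summing to 1).
    avg = sum(w * f for w, f in zip(weights, fitness))
    return [w * f / avg for w, f in zip(weights, fitness)]

# Hypothetical MPC prioritization: three cost terms with equal initial
# weights; fitness taken as each objective's current normalized error,
# so poorly performing objectives gain priority over time.
w = [1 / 3, 1 / 3, 1 / 3]
errors = [4.0, 1.0, 1.0]
for _ in range(10):
    w = replicator_step(w, errors)
```

With time-varying disturbances, the `errors` vector would be recomputed each control step, giving the dynamical tuning the abstract contrasts with a static prioritization.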
Multidimensional quantum entanglement with large-scale integrated optics.
Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G
2018-04-20
The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Toward a theoretical framework for trustworthy cyber sensing
NASA Astrophysics Data System (ADS)
Xu, Shouhuai
2010-04-01
Cyberspace is an indispensable part of the economy and society, but has been "polluted" with many compromised computers that can be abused to launch further attacks against the others. Since it is likely that there always are compromised computers, it is important to be aware of the (dynamic) cyber security-related situation, which is however challenging because cyberspace is an extremely large-scale complex system. Our project aims to investigate a theoretical framework for trustworthy cyber sensing. With the perspective of treating cyberspace as a large-scale complex system, the core question we aim to address is: What would be a competent theoretical (mathematical and algorithmic) framework for designing, analyzing, deploying, managing, and adapting cyber sensor systems so as to provide trustworthy information or input to the higher layer of cyber situation-awareness management, even in the presence of sophisticated malicious attacks against the cyber sensor systems?
Enhancing the transmission efficiency by edge deletion in scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Guo-Qing; Wang, Di; Li, Guo-Jie
2007-07-01
How to improve the transmission efficiency of Internet-like packet-switching networks is one of the most important problems in complex networks as well as for the Internet research community. In this paper we propose a convenient method to enhance the transmission efficiency of scale-free networks dramatically by removing the edges linking to nodes with large betweenness, which we call the “black sheep.” The advantages of our method are its simplicity and practical importance. Since the black sheep edges are very costly due to their large bandwidth, our method can decrease cost as well as achieve higher network throughput. Moreover, we analyze the curve of the largest betweenness as more and more black sheep edges are deleted and find that there is a sharp transition at the critical point where the average degree of the nodes ⟨k⟩ → 2.
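The pruning step can be sketched in a few lines: compute betweenness centrality (here with Brandes' algorithm for unweighted graphs) and delete the edges incident to the highest-betweenness nodes. The dict-of-lists graph and the tiny example are illustrative; the paper works on generated scale-free topologies.

```python
from collections import deque

def node_betweenness(adj):
    # Brandes' algorithm for betweenness centrality on an unweighted,
    # undirected graph given as {node: [neighbors]}.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                      # BFS: shortest-path counts
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                  # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}  # undirected: halve

def drop_black_sheep_edges(adj, k=1):
    # Delete all edges incident to the k highest-betweenness
    # ("black sheep") nodes, returning a pruned adjacency dict.
    bc = node_betweenness(adj)
    hubs = set(sorted(bc, key=bc.get, reverse=True)[:k])
    return {v: [w for w in nbrs if v not in hubs and w not in hubs]
            for v, nbrs in adj.items()}
```

On the path a-b-c, node b carries all shortest paths between a and c, so its edges are the ones removed.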
NASA Technical Reports Server (NTRS)
Campbell, W. J.; Josberger, E. G.; Gloersen, P.; Johannessen, O. M.; Guest, P. S.
1987-01-01
The data acquired during the summer 1984 Marginal Ice Zone Experiment in the Fram Strait-Greenland Sea marginal ice zone, using airborne active and passive microwave sensors and the Nimbus 7 SMMR, were analyzed to compile a sequential description of the mesoscale and large-scale ice morphology variations during the period of June 6 - July 16, 1984. Throughout the experiment, the long ice edge between northwest Svalbard and central Greenland meandered; eddies were repeatedly formed, moved, and disappeared but the ice edge remained within a 100-km-wide zone. The ice pack behind this alternately diffuse and compact edge underwent rapid and pronounced variations in ice concentration over a 200-km-wide zone. The high-resolution ice concentration distributions obtained in the aircraft images agree well with the low-resolution distributions of SMMR images.
Exploring the Large Scale Anisotropy in the Cosmic Microwave Background Radiation at 170 GHz
NASA Astrophysics Data System (ADS)
Ganga, Kenneth Matthew
1994-01-01
In this thesis, data from the Far Infra-Red Survey (FIRS), a balloon-borne experiment designed to measure the large scale anisotropy in the cosmic microwave background radiation, are analyzed. The FIRS operates in four frequency bands at 170, 280, 480, and 670 GHz, using an approximately Gaussian beam with a 3.8 deg full-width-at-half-maximum. A cross-correlation with the COBE/DMR first-year maps yields significant results, confirming the DMR detection of anisotropy in the cosmic microwave background radiation. Analysis of the FIRS data alone sets bounds on the amplitude of anisotropy under the assumption that the fluctuations are described by a Harrison-Peebles-Zel'dovich spectrum and further analysis sets limits on the index of the primordial density fluctuations for an Einstein-DeSitter universe. Galactic dust emission is discussed and limits are set on the magnitude of possible systematic errors in the measurement.
Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release
NASA Astrophysics Data System (ADS)
Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios
2017-02-01
The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λsc) of the particles between the scatterers inside the energization volume.
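A toy Monte Carlo version of the stochastic Fermi process described above (multiplicative energy kicks, with head-on collisions slightly favored, as in Fermi's original argument) illustrates both the heating and the energetic tail. The scatterer speed, particle counts, and gain probability are illustrative assumptions, not the paper's 3D-grid setup.

```python
import random

def fermi_energize(n_particles=2000, n_scatter=200, v_over_c=0.05, seed=1):
    # Toy second-order Fermi process: each scattering multiplies a
    # particle's energy by (1 + v/c) for a head-on collision or
    # (1 - v/c) for an overtaking one; head-on collisions are slightly
    # more probable, producing net stochastic energization.
    rng = random.Random(seed)
    p_gain = (1 + v_over_c) / 2
    energies = [1.0] * n_particles
    for _ in range(n_scatter):
        for i in range(n_particles):
            if rng.random() < p_gain:
                energies[i] *= 1 + v_over_c
            else:
                energies[i] *= 1 - v_over_c
    return energies
```

After many scatterings the mean energy rises above its initial value and the distribution develops a high-energy tail, the qualitative behavior the transport-coefficient analysis quantifies.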
SPIKY: a graphical user interface for monitoring spike train synchrony
Kreuz, Thomas; Mulansky, Mario; Bozanic, Nebojsa
2015-01-01
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels. PMID:25744888
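SPIKY itself is Matlab, but the ISI-distance it implements can be sketched briefly: at each time point the dissimilarity is the normalized difference of the two trains' current inter-spike intervals, averaged over the recording. The version below is a simplified, time-sampled Python sketch (the published definition uses exact piecewise integration) and assumes both trains have spikes bracketing the observation window.

```python
def current_isi(spikes, t):
    # Length of the inter-spike interval containing time t
    # (spikes sorted; t inside the span of the train).
    prev = max(s for s in spikes if s <= t)
    nxt = min(s for s in spikes if s > t)
    return nxt - prev

def isi_distance(train1, train2, t0, t1, n_samples=1000):
    # Time-averaged ISI-distance, approximated by midpoint sampling:
    # 0 for identical trains, approaching 1 for very dissimilar rates.
    total = 0.0
    for i in range(n_samples):
        t = t0 + (i + 0.5) * (t1 - t0) / n_samples
        a, b = current_isi(train1, t), current_isi(train2, t)
        total += abs(a - b) / max(a, b)
    return total / n_samples
```

Two identical trains give a distance of 0; a train with intervals of 1 against one with intervals of 2 gives |1-2|/2 = 0.5 everywhere.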
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.
2017-12-01
With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced in processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100 TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland, and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects being explored include how to process these voluminous SLCs and interferograms at global scales and how to keep up with the large daily SAR data volumes and rates. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms, with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and the challenges that arise in processing SAR datasets to derived time-series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time-series analysis in the cloud?
We will also present findings from applying machine learning and data analytics to the processed SAR data streams, along with lessons learned on easing the SAR community onto interfacing with these cloud-based SAR science data systems.
Fedy, B.C.; Doherty, K.E.
2011-01-01
Animal species across multiple taxa demonstrate multi-annual population cycles, which have long been of interest to ecologists. Correlated population cycles between species that do not share a predator-prey relationship are particularly intriguing and challenging to explain. We investigated annual population trends of greater sage-grouse (Centrocercus urophasianus) and cottontail rabbits (Sylvilagus sp.) across Wyoming to explore the possibility of correlations between unrelated species, over multiple cycles, very large spatial areas, and relatively southern latitudes in terms of cycling species. We analyzed sage-grouse lek counts and annual hunter harvest indices from 1982 to 2007. We show that greater sage-grouse, currently listed as warranted but precluded under the US Endangered Species Act, and cottontails have highly correlated cycles (r = 0.77). We explore possible mechanistic hypotheses to explain the synchronous population cycles. Our research highlights the importance of control populations in both adaptive management and impact studies. Furthermore, we demonstrate the functional value of these indices (lek counts and hunter harvest) for tracking broad-scale fluctuations in the species. This level of highly correlated long-term cycling has not previously been documented between two non-related species, over a long time-series, very large spatial scale, and within more southern latitudes. © 2010 US Government.
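The reported synchrony (r = 0.77) is a Pearson product-moment correlation between the two annual indices. A minimal sketch, with made-up toy series standing in for the lek-count and hunter-harvest data:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient between two
    # equal-length series: covariance divided by the product of
    # standard deviations, ranging from -1 to 1.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical annual indices: two in-phase cycling populations.
leks = [50, 80, 120, 90, 55, 45, 75, 115, 95, 60]
harvest = [400, 650, 900, 700, 420, 380, 600, 880, 720, 470]
r = pearson_r(leks, harvest)
```

Perfectly in-phase series yield r = 1 and anti-phase series yield r = -1; intermediate values like the study's 0.77 indicate strong but imperfect synchrony.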
Qi, Sen; Mitchell, Ross E
2012-01-01
The first large-scale, nationwide academic achievement testing program using Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the validity and reliability of using the Stanford for this special student population still require extensive scrutiny. Recent shifts in educational policy environment, which require that schools enable all children to achieve proficiency through accountability testing, warrants a close examination of the adequacy and relevance of the current large-scale testing of deaf and hard-of-hearing students. This study has three objectives: (a) it will summarize the historical data over the last three decades to indicate trends in academic achievement for this special population, (b) it will analyze the current federal laws and regulations related to educational testing and special education, thereby identifying gaps between policy and practice in the field, especially identifying the limitations of current testing programs in assessing what deaf and hard-of-hearing students know, and (c) it will offer some insights and suggestions for future testing programs for deaf and hard-of-hearing students.
NASA Technical Reports Server (NTRS)
Lutwack, R.
1974-01-01
A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10 year objective of a program to establish the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5 year program were derived from a set of 5 year objectives deduced from the 10 year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 × 10^8 peak W/year of $0.50 cells was projected for the year 1985. The developments of other photovoltaic conversion systems were assigned to longer range development roles. The status of the technology developments and the applicability of solar arrays in particular power installations, ranging from houses to central power plants, was scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5 year phase of the program is $268.5M.