Sample records for large-scale multiple comparisons

  1. Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores.

    PubMed

    Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A

    2017-12-21

    Prediction models and prognostic scores have become increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3-year mortality in patients with COPD are ADO and eBODE.
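
To make the common-comparator step concrete, here is a minimal Python sketch of fixed-effect, inverse-variance pooling of cohort-level AUC differences against a shared comparator. All numbers are illustrative, and the sketch deliberately ignores the within-cohort correlations that the authors stress MSC must model explicitly.

```python
import numpy as np

# Hypothetical cohort-level AUCs (rows = cohorts; columns = common comparator,
# score A, score B) and their variances -- illustrative numbers only.
auc = np.array([[0.65, 0.72, 0.70],
                [0.66, 0.74, 0.69]])
var = np.full_like(auc, 4e-4)

# Difference of each score against the common comparator, per cohort.
diff = auc[:, 1:] - auc[:, [0]]
# Variance of the difference, here ignoring within-cohort correlation
# (the paper stresses that MSC must model that correlation explicitly).
dvar = var[:, 1:] + var[:, [0]]

# Fixed-effect inverse-variance pooling across cohorts.
w = 1.0 / dvar
pooled = (w * diff).sum(axis=0) / w.sum(axis=0)
print(pooled)  # pooled AUC difference of each score vs. the comparator
```

In the full MSC approach these pooled contrasts would then feed a network meta-analysis rather than simple pooling.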

  2. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
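
MEMD itself is beyond a short sketch, but the per-pixel comparison idea can be illustrated with a generic activity-based fusion baseline (assumed here: NumPy, with gradient magnitude as the activity measure; this is not the authors' MEMD algorithm):

```python
import numpy as np

def fuse_by_activity(images):
    """Fuse images by keeping, at each pixel, the input with the highest
    local activity (sum of absolute spatial gradients). A generic baseline,
    not MEMD."""
    imgs = np.stack([np.asarray(im, dtype=float) for im in images])
    # Per-image activity map: sum of absolute gradients along both axes.
    activity = np.stack([sum(np.abs(g) for g in np.gradient(im)) for im in imgs])
    choice = np.argmax(activity, axis=0)                 # winning input per pixel
    return np.take_along_axis(imgs, choice[None], axis=0)[0]

rng = np.random.default_rng(0)
a = np.zeros((16, 16)); a[:, :8] = rng.normal(size=(16, 8))   # detail on the left
b = np.zeros((16, 16)); b[:, 8:] = rng.normal(size=(16, 8))   # detail on the right
fused = fuse_by_activity([a, b])
```

A multi-focus pair fused this way keeps the in-focus (high-activity) half of each input; MEMD replaces the single activity map with matched oscillatory scales across images.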

  3. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  4. Measures of Agreement Between Many Raters for Ordinal Classifications

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2015-01-01

    Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians’ ratings, many large-scale studies have been conducted to quantify agreement between multiple experts’ ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients’ test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
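
The prevalence problem the authors attribute to Cohen's kappa is easy to reproduce. A minimal two-rater sketch with illustrative ratings:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)                                # observed agreement
    m1 = np.bincount(r1, minlength=n_categories) / len(r1)   # marginal of rater 1
    m2 = np.bincount(r2, minlength=n_categories) / len(r2)   # marginal of rater 2
    p_exp = np.sum(m1 * m2)                                  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Balanced ratings: kappa behaves sensibly.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0], 2))   # 0.5

# Prevalence problem: 90% raw agreement, yet kappa collapses to 0
# because one category dominates the marginals.
print(cohens_kappa([0]*9 + [1], [0]*10, 2))          # 0.0
```

The model-based measure proposed in the paper is designed to avoid exactly this marginal-distribution sensitivity.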

  5. Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate

    PubMed Central

    Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.

    2015-01-01

    Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
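
The multiple comparisons problem mentioned above is commonly handled with step-down corrections. A sketch of the standard Holm-Bonferroni procedure (a generic method, not necessarily the one these authors use):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm-Bonferroni correction: returns which hypotheses to
    reject while controlling the family-wise error rate at alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                                          # stop at first failure
    return reject

# Four simultaneous tests, e.g. one per electrode pair.
print(holm_bonferroni([0.010, 0.040, 0.030, 0.005]))  # [True, False, False, True]
```

With hundreds of electrodes the number of pairwise tests grows quadratically, which is why such corrections become essential in large-scale recordings.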

  6. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000 × 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA, and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
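
The ALOHA protocols compared in this record have classical closed-form throughputs: S = Ge^(-2G) for unslotted (pure) ALOHA and S = Ge^(-G) for slotted ALOHA at offered load G. A quick numerical check of the textbook maxima:

```python
import math

def pure_aloha_throughput(G):
    """Expected throughput of unslotted (pure) ALOHA at offered load G."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Expected throughput of slotted ALOHA at offered load G."""
    return G * math.exp(-G)

# Classic maxima: about 18.4% channel utilization for pure ALOHA at G = 0.5,
# about 36.8% for slotted ALOHA at G = 1.
print(pure_aloha_throughput(0.5))   # 1/(2e) ≈ 0.1839
print(slotted_aloha_throughput(1))  # 1/e   ≈ 0.3679
```

These low ceilings help explain the paper's observed trade-off between ease of implementation and speed of operation relative to an arbitrated crossbar.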

  7. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
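
GLAD and ALiCE are Java middleware; as a language-agnostic illustration of the task-based parallelism they exploit, here is a Python sketch that farms out all-pairs sequence comparisons (edit distance) to a local worker pool standing in for the grid:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def levenshtein(pair):
    """Edit distance between two sequences (one task in the task farm)."""
    a, b = pair
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

sequences = ["ACGT", "ACGA", "TTGT"]
pairs = list(combinations(sequences, 2))
# Each pairwise comparison is an independent task; grid middleware such as
# ALiCE would farm these out across machines instead of local threads.
with ThreadPoolExecutor() as pool:
    distances = dict(zip(pairs, pool.map(levenshtein, pairs)))
print(distances)
```

Because the tasks share no state, the same pattern scales from a thread pool to a compute grid with only the scheduler changing.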

  8. Improving the evaluation of therapeutic interventions in multiple sclerosis: the role of new psychometric methods.

    PubMed

    Hobart, J; Cano, S

    2009-02-01

    In this monograph we examine the added value of new psychometric methods (Rasch measurement and Item Response Theory) over traditional psychometric approaches by comparing and contrasting their psychometric evaluations of existing sets of rating scale data. We have concentrated on Rasch measurement rather than Item Response Theory because we believe that it is the more advantageous method for health measurement from a conceptual, theoretical and practical perspective. Our intention is to provide an authoritative document that describes the principles of Rasch measurement and the practice of Rasch analysis in a clear, detailed, non-technical form that is accurate and accessible to clinicians and researchers in health measurement. A comparison was undertaken of traditional and new psychometric methods in five large sets of rating scale data: (1) evaluation of the Rivermead Mobility Index (RMI) in data from 666 participants in the Cannabis in Multiple Sclerosis (CAMS) study; (2) evaluation of the Multiple Sclerosis Impact Scale (MSIS-29) in data from 1725 people with multiple sclerosis; (3) evaluation of test-retest reliability of MSIS-29 in data from 150 people with multiple sclerosis; (4) examination of the use of Rasch analysis to equate scales purporting to measure the same health construct in 585 people with multiple sclerosis; and (5) comparison of relative responsiveness of the Barthel Index and Functional Independence Measure in data from 1400 people undergoing neurorehabilitation. Both Rasch measurement and Item Response Theory are conceptually and theoretically superior to traditional psychometric methods. Findings from each of the five studies show that Rasch analysis is empirically superior to traditional psychometric methods for evaluating rating scales, developing rating scales, analysing rating scale data, understanding and measuring stability and change, and understanding the health constructs we seek to quantify. There is considerable added value in using Rasch analysis rather than traditional psychometric methods in health measurement. Future research directions include the need to reproduce our findings in a range of clinical populations, detailed head-to-head comparisons of Rasch analysis and Item Response Theory, and the application of Rasch analysis to clinical practice.
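
For readers unfamiliar with the model, the dichotomous Rasch model places persons and items on a single scale: the probability of endorsing an item depends only on the difference between person ability θ and item difficulty b. A minimal sketch:

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P(X=1) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item difficulty has a 50% chance of
# endorsing the item; the probability rises with ability and falls with
# item difficulty.
print(rasch_probability(0.0, 0.0))   # 0.5
print(rasch_probability(2.0, 0.0))   # ≈ 0.88
```

Rasch analysis estimates the θ and b parameters from observed response patterns and then checks whether the data fit this model.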

  9. Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories

    ERIC Educational Resources Information Center

    Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher

    2009-01-01

    Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…

  10. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system, CoCoNUT (Computational Comparative geNomics Utility Toolkit), that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools with respect to the quality of the results. The use of state-of-the-art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large-scale studies in comparative genomics. PMID:19014477
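
CoCoNUT's algorithms are not reproduced here; as a toy illustration of alignment-free pairwise genome comparison, a k-mer Jaccard similarity over hypothetical sequences (k = 4):

```python
def kmer_set(seq, k=4):
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_similarity(seq_a, seq_b, k=4):
    """Alignment-free similarity of two sequences via shared k-mers."""
    a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
    return len(a & b) / len(a | b)

print(jaccard_similarity("ACGTACGTAC", "ACGTACGTAC"))  # 1.0 (identical)
print(jaccard_similarity("ACGTACGTAC", "TTTTTTTTTT"))  # 0.0 (no shared 4-mers)
```

Real genome comparison tools build on far more efficient index structures (suffix arrays and the like), but the pairwise-similarity idea is the same.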

  11. Large Scale GW Calculations on the Cori System

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  12. Quantum internet using code division multiple access

    PubMed Central

    Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco

    2013-01-01

    A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488
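
The paper's scheme is quantum and chaos-based; the underlying CDMA idea can be illustrated classically with direct-sequence spreading, where near-orthogonal pseudorandom codes let two sender-receiver pairs share one channel (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_chips = 64                                     # chips per bit
codes = rng.choice([-1, 1], size=(2, n_chips))   # one spreading code per sender pair
bits = np.array([[ 1, -1,  1],                   # sender 1's bits
                 [-1, -1,  1]])                  # sender 2's bits

# Spread each bit over its sender's code; the shared channel is the sum.
channel = sum(np.kron(bits[k], codes[k]) for k in range(2))

# Each receiver despreads by correlating the channel with its own code.
decoded = np.array([
    np.sign(channel.reshape(-1, n_chips) @ codes[k] / n_chips)
    for k in range(2)
])
print(decoded)  # recovers each sender's bits despite the shared medium
```

Because random ±1 codes are nearly orthogonal, the cross-talk term stays below the bit amplitude and the sign of the correlation recovers each bit; the q-CDMA proposal replaces the pseudorandom codes with chaotic encoding of quantum states.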

  13. Impact of tissue atrophy on high-pass filtered MRI signal phase-based assessment in large-scale group-comparison studies: A simulation study

    NASA Astrophysics Data System (ADS)

    Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert

    2013-10-01

    The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson’s disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of increased iron concentration even without any real change in the tissue iron concentration. Although this effect is relatively small, results of large-scale group comparisons may be driven by anatomical changes rather than by changes of the iron concentration.

  14. Large-scale atomistic simulations demonstrate dominant alloy disorder effects in GaBixAs1-x/GaAs multiple quantum wells

    NASA Astrophysics Data System (ADS)

    Usman, Muhammad

    2018-04-01

    Bismide semiconductor materials and heterostructures are considered promising candidates for the design and implementation of photonic, thermoelectric, photovoltaic, and spintronic devices. This work presents a detailed theoretical study of the electronic and optical properties of strongly coupled GaBixAs1-x/GaAs multiple quantum well (MQW) structures. Based on a systematic set of large-scale atomistic tight-binding calculations, our results reveal that the impact of atomic-scale fluctuations in alloy composition is stronger than the interwell coupling effect, and plays an important role in the electronic and optical properties of the investigated MQW structures. Independent of QW geometry parameters, alloy disorder leads to a strong confinement of charge carriers, a large broadening of the hole energies, and a red-shift in the ground-state transition wavelength. Polarization-resolved optical transition strengths exhibit a striking effect of disorder, where the inhomogeneous broadening could exceed an order of magnitude for MQWs, in comparison to a factor of about 3 for single QWs. The strong influence of alloy disorder effects persists when small variations in the size and composition of MQWs typically expected in a realistic experimental environment are considered. The presented results highlight the limited scope of continuum methods and emphasize the need for large-scale atomistic approaches to design devices with tailored functionalities based on the novel properties of bismide materials.

  15. Data for Room Fire Model Comparisons

    PubMed Central

    Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

    With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form), should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system. PMID:28184121

  16. Data for Room Fire Model Comparisons.

    PubMed

    Peacock, Richard D; Davis, Sanford; Babrauskas, Vytenis

    1991-01-01

    With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form), should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system.

  17. A Comparison of Raw-to-Scale Conversion Consistency between Single- and Multiple-Linking Using a Nonequivalent Groups Anchor Test Design. Research Report. ETS RR-14-13

    ERIC Educational Resources Information Center

    Liu, Jinghua; Guo, Hongwen; Dorans, Neil J.

    2014-01-01

    Maintaining score interchangeability and scale consistency is crucial for any testing programs that administer multiple forms across years. The use of a multiple linking design, which involves equating a new form to multiple old forms and averaging the conversions, has been proposed to control scale drift. However, the use of multiple linking…

  18. Evolution of Precipitation Structure During the November DYNAMO MJO Event: Cloud-Resolving Model Intercomparison and Cross Validation Using Radar Observations

    NASA Astrophysics Data System (ADS)

    Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong

    2018-04-01

    The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also revealed common deficiencies in CRM simulations where they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.

  19. Apparently abnormal Wechsler Memory Scale index score patterns in the normal population.

    PubMed

    Carrasco, Roman Marcus; Grups, Josefine; Evans, Brittney; Simco, Edward; Mittenberg, Wiley

    2015-01-01

    Interpretation of the Wechsler Memory Scale-Fourth Edition may involve examination of multiple memory index score contrasts and similar comparisons with Wechsler Adult Intelligence Scale-Fourth Edition ability indexes. Standardization sample data suggest that 15-point differences between any specific pair of index scores are relatively uncommon in normal individuals, but these base rates refer to a comparison between a single pair of indexes rather than multiple simultaneous comparisons among indexes. This study provides normative data for the occurrence of multiple index score differences calculated by using Monte Carlo simulations and validated against standardization data. Differences of 15 points between any two memory indexes or between memory and ability indexes occurred in 60% and 48% of the normative sample, respectively. Wechsler index score discrepancies are normally common and therefore not clinically meaningful when numerous such comparisons are made. Explicit prior interpretive hypotheses are necessary to reduce the number of index comparisons and associated false-positive conclusions. Monte Carlo simulation accurately predicts these false-positive rates.
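
The Monte Carlo idea is straightforward to sketch: draw correlated index scores and count how often any pairwise difference reaches 15 points. The parameters below (four indexes, mean 100, SD 15, inter-index correlation 0.6) are illustrative, not WMS-IV norms:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_indexes = 100_000, 4
mean, sd, rho = 100.0, 15.0, 0.6        # illustrative parameters, not actual norms

# Covariance matrix with a common inter-index correlation rho.
cov = np.full((n_indexes, n_indexes), rho * sd**2)
np.fill_diagonal(cov, sd**2)
scores = rng.multivariate_normal(np.full(n_indexes, mean), cov, size=n_people)

# Fraction of simulated examinees with ANY pairwise difference >= 15 points.
diffs = scores[:, :, None] - scores[:, None, :]
any_large_diff = (np.abs(diffs) >= 15).any(axis=(1, 2)).mean()
print(any_large_diff)  # far higher than the single-pair base rate
```

With six simultaneous pairwise contrasts, the probability that at least one exceeds 15 points is several times the single-pair rate, which is the study's central point about false-positive interpretations.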

  20. Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.

    PubMed

    Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro

    2013-01-01

    Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted with both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly-admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
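
Three indices such a study might include are easy to state precisely; the toy counts below show how richness can agree while Shannon and Simpson diverge (illustrative data, not the 11 indices used in the paper):

```python
import numpy as np

def species_richness(counts):
    """Number of species present."""
    return int((np.asarray(counts) > 0).sum())

def shannon_index(counts):
    """Shannon entropy of the relative abundances."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def simpson_index(counts):
    """Gini-Simpson: probability two random individuals differ in species."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - (p ** 2).sum())

even   = [10, 10, 10, 10]   # four species, even abundances
skewed = [37,  1,  1,  1]   # same richness, one dominant species
print(species_richness(even), species_richness(skewed))   # 4 4
print(shannon_index(even) > shannon_index(skewed))        # True
print(simpson_index(even) > simpson_index(skewed))        # True
```

Two communities with identical richness can thus rank very differently under evenness-sensitive indices, which is exactly the "pattern inconsistency between multiple diversity indices" in the title.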

  21. ProteinInferencer: Confident protein identification and multiple experiment comparison for large scale proteomics projects.

    PubMed

    Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R

    2015-11-03

    Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis, and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- and large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
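
FDR control in shotgun proteomics is conventionally estimated with the target-decoy strategy; a minimal sketch of that heuristic (not ProteinInferencer's actual scoring model), with hypothetical scores:

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate FDR at a score threshold as (decoy hits) / (target hits),
    the standard target-decoy heuristic used in shotgun proteomics."""
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / max(n_target, 1)

targets = [10.2, 9.1, 8.4, 7.7, 3.0, 2.1]   # hypothetical protein scores
decoys  = [ 4.0, 3.2, 1.1, 0.9, 0.5, 0.2]   # scores from reversed-sequence decoys
print(target_decoy_fdr(targets, decoys, 5.0))  # 0.0  (no decoys above 5.0)
print(target_decoy_fdr(targets, decoys, 2.0))  # 2/6 ≈ 0.33
```

The threshold is then chosen so the estimated FDR stays at the desired level; combining datasets complicates this because decoy hits accumulate differently across experiments, which is the issue the paper addresses.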

  22. Bose-Einstein correlations in pp and PbPb collisions with ALICE at the LHC

    ScienceCinema

    Kisiel, Adam

    2018-05-14

    We report on the results of identical pion femtoscopy at the LHC. The Bose-Einstein correlation analysis was performed on the large-statistics ALICE p+p datasets at √s = 0.9 and 7 TeV collected during 2010 LHC running and the first Pb+Pb dataset at √s_NN = 2.76 TeV. Detailed pion femtoscopy studies in heavy-ion collisions have shown that emission region sizes ("HBT radii") decrease with increasing pair momentum, which is understood as a manifestation of the collective behavior of matter. 3D radii were also found to universally scale with event multiplicity. In p+p collisions at 7 TeV one measures multiplicities which are comparable with those registered in peripheral AuAu and CuCu collisions at RHIC, so direct comparisons and tests of scaling laws are now possible. We show the results of double-differential 3D pion HBT analysis, as a function of multiplicity and pair momentum. The results for the two collision energies are compared to results obtained in heavy-ion collisions at similar multiplicity and p+p collisions at lower energy. We identify the relevant scaling variables for the femtoscopic radii and discuss the similarities and differences to results from heavy ions. The observed trends give insight into the soft particle production mechanism in p+p collisions and suggest that a self-interacting collective system may be created in sufficiently high multiplicity events. First results for central Pb+Pb collisions are also shown. A significant increase of the reaction zone volume and lifetime in comparison to RHIC is observed. Signatures of collective hydrodynamics-like behavior of the system are also apparent, and are compared to model predictions.

  3. Influence of atmospheric transport on ozone and trace-level toxic air contaminants over the northeastern United States

    NASA Astrophysics Data System (ADS)

    Brankov, Elvira

    This thesis presents a methodology for examining the relationship between synoptic-scale atmospheric transport patterns and observed pollutant concentration levels. It involves calculating a large number of back-trajectories from the observational site and subjecting them to cluster analysis. The pollutant concentration data observed at that site are then segregated according to the back-trajectory clusters. If the pollutant observations extend over several seasons, it is important to filter out seasonal and long-term components from the time series data before cluster-segregation, because only the short-term component of the time series is related to the synoptic-scale transport. Multiple comparison procedures are used to test for significant differences in the chemical composition of pollutant data associated with each cluster. This procedure is useful in indicating potential pollutant source regions and isolating meteorological regimes associated with pollutant transport from those regions. If many observational sites are available, the spatial and temporal scales of pollution transport from a given direction can be extracted through time-lagged inter-site correlation analysis of pollutant concentrations. The proposed methodology is applicable to any pollutant at any site if a sufficiently abundant data set is available. This is illustrated through examination of five-year-long time series of ozone concentrations at several sites in the Northeast. The results provide evidence of ozone transport to these sites, revealing the characteristic spatial and temporal scales involved in the transport and identifying source regions for this pollutant. Problems related to statistical analyses of censored data are addressed in the second half of this thesis.
Although censoring (reporting concentrations in a non-quantitative way) is typical for trace-level measurements, methods for statistical analysis, inference and interpretation of such data are complex and still under development. In this study, multiple comparison of censored data sets was required in order to examine the influence of synoptic-scale circulations on concentration levels of several trace-level toxic pollutants observed in the Northeast (e.g., As, Se, Mn, V). Since the traditional multiple comparison procedures are not readily applicable to such data sets, a Monte Carlo simulation study was performed to assess several nonparametric methods for multiple comparison of censored data sets. Application of an appropriate comparison procedure to clusters of toxic trace elements observed in the Northeast led to the identification of potential source regions and atmospheric patterns associated with the long-range transport of these pollutants. A comparison-of-proportions method and elemental ratio calculations were used to confirm and clarify these inferences with a greater degree of confidence.
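The pairwise, rank-based logic behind nonparametric comparison of censored (non-detect) data can be sketched with a Gehan-type score. This is an illustrative toy with invented values, not one of the specific procedures assessed in the thesis's Monte Carlo study:

```python
# Gehan-type pairwise scoring for left-censored (non-detect) data.
def gehan_score(group_a, group_b):
    """Compare two samples of (value, censored) pairs.

    censored=True means a non-detect: the true value lies somewhere
    below `value` (the detection limit). Returns the sum over all
    cross-pairs of +1 if a is definitely larger, -1 if definitely
    smaller, and 0 if censoring makes the ordering indeterminate.
    """
    total = 0
    for av, ac in group_a:
        for bv, bc in group_b:
            if not ac and not bc:        # both detected: ordinary compare
                total += (av > bv) - (av < bv)
            elif ac and not bc:          # a censored: true a < av
                total += -1 if av <= bv else 0
            elif bc and not ac:          # b censored: true b < bv
                total += 1 if bv <= av else 0
            # both censored: ordering unknown, contributes 0
    return total

site1 = [(5.0, False), (2.0, True), (7.5, False)]   # (2.0, True) is "<2"
site2 = [(1.0, False), (3.0, False), (2.0, True)]
print(gehan_score(site1, site2))  # → 5
```

A full test would compare the statistic to its permutation distribution; the scoring rule above is the part that distinguishes censored-data methods from ordinary rank tests.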

  4. A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries

    ERIC Educational Resources Information Center

    Gottschall, Amanda C.; West, Stephen G.; Enders, Craig K.

    2012-01-01

    Behavioral science researchers routinely use scale scores that sum or average a set of questionnaire items to address their substantive questions. A researcher applying multiple imputation to incomplete questionnaire data can either impute the incomplete items prior to computing scale scores or impute the scale scores directly from other scale…

  5. A Survey of Residents' Perceptions of the Effect of Large-Scale Economic Developments on Perceived Safety, Violence, and Economic Benefits.

    PubMed

    Fabio, Anthony; Geller, Ruth; Bazaco, Michael; Bear, Todd M; Foulds, Abigail L; Duell, Jessica; Sharma, Ravi

    2015-01-01

    Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, recorded crime decreased in both neighborhoods. Large-scale economic developments directly influence perceptions of violence, which can diverge from actual violence rates.
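The chi-square comparison of perceived-crime proportions across neighborhoods can be sketched as follows. The cell counts below are invented for illustration and are not the survey's actual data:

```python
# Pearson chi-square statistic for an r x c count table
# (hypothetical counts, not the survey's data).
def chi_square_stat(table):
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand  # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# rows: neighborhoods; cols: (perceived crime increase, did not)
table = [[42, 58],   # arena neighborhood
         [29, 71]]   # casino neighborhood
stat = chi_square_stat(table)
print(round(stat, 3))  # → 3.69
```

For a p-value one would compare the statistic to a chi-square distribution with (rows-1)(cols-1) degrees of freedom, e.g. via `scipy.stats.chi2_contingency`.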

  6. Variability in large-scale wind power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiviluoma, Juha; Holttinen, Hannele; Weir, David

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include probability distributions for different ramp durations, seasonal and diurnal variability, and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also increases variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with high changes in wind output, which were not present in large areas with well-dispersed wind power.
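The ramp statistic referenced above (maximum 1 h change as a share of nominal capacity) can be sketched on a synthetic hourly series; the numbers are invented, not data from the paper:

```python
# Maximum ramp over a given duration, as a fraction of capacity.
def max_abs_ramp(output_mw, capacity_mw, duration_h=1):
    """Largest absolute change over `duration_h` steps, as a
    fraction of nominal capacity (assumes hourly samples)."""
    ramps = [abs(output_mw[i + duration_h] - output_mw[i])
             for i in range(len(output_mw) - duration_h)]
    return max(ramps) / capacity_mw

hourly = [120, 150, 210, 180, 90, 60, 110, 95]  # MW, synthetic
print(max_abs_ramp(hourly, capacity_mw=1000))   # → 0.09
```

Collecting all such ramps (rather than only the maximum) yields the per-duration probability distributions the paper compares across regions.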

  7. Total-dose radiation effects data for semiconductor devices, volume 3

    NASA Technical Reports Server (NTRS)

    Price, W. E.; Martin, K. E.; Nichols, D. K.; Gauthier, M. K.; Brown, S. F.

    1982-01-01

    Volume 3 of this three-volume set provides a detailed analysis of the data in Volumes 1 and 2, most of which was generated for the Galileo Orbiter Program in support of NASA space programs. Volume 1 includes total ionizing dose radiation test data on diodes, bipolar transistors, field effect transistors, and miscellaneous discrete solid-state devices. Volume 2 includes similar data on integrated circuits and a few large-scale integrated circuits. The data of Volumes 1 and 2 are combined in graphic format in Volume 3 to provide a comparison of radiation sensitivities of devices of a given type and different manufacturer, a comparison of multiple tests for a single date code, a comparison of multiple tests for a single lot, and a comparison of radiation sensitivities vs time (date codes). All data were generated using a steady-state 2.5-MeV electron source (Dynamitron) or a Cobalt-60 gamma ray source. The data that compose Volume 3 represent 26 different device types, 224 tests, and a total of 1040 devices. A comparison of the effects of steady-state electrons and Cobalt-60 gamma rays is also presented.

  8. Trans-National Scale-Up of Services in Global Health

    PubMed Central

    Shahin, Ilan; Sohal, Raman; Ginther, John; Hayden, Leigh; MacDonald, John A.; Mossman, Kathryn; Parikh, Himanshu; McGahan, Anita; Mitchell, Will; Bhattacharyya, Onil

    2014-01-01

    Background Scaling up innovative healthcare programs offers a means to improve access, quality, and health equity across multiple health areas. Despite large numbers of promising projects, little is known about successful efforts to scale up. This study examines trans-national scale, whereby a program operates in two or more countries. Trans-national scale is a distinct measure that reflects opportunities to replicate healthcare programs in multiple countries, thereby providing services to broader populations. Methods Based on the Center for Health Market Innovations (CHMI) database of nearly 1200 health programs, the study contrasts 116 programs that have achieved trans-national scale with 1,068 single-country programs. Data was collected on the programs' health focus, service activity, legal status, and funding sources, as well as the programs' locations (rural v. urban emphasis), and founding year; differences are reported with statistical significance. Findings This analysis examines 116 programs that have achieved trans-national scale (TNS) across multiple disease areas and activity types. Compared to 1,068 single-country programs, we find that trans-nationally scaled programs are more donor-reliant; more likely to focus on targeted health needs such as HIV/AIDS, TB, malaria, or family planning rather than provide more comprehensive general care; and more likely to engage in activities that support healthcare services rather than provide direct clinical care. Conclusion This work, based on a large data set of health programs, reports on trans-national scale with comparison to single-country programs. The work is a step towards understanding when programs are able to replicate their services as they attempt to expand health services for the poor across countries and health areas. 
A subset of these programs should be the subject of case studies to understand factors that affect the scaling process, particularly seeking to identify mechanisms that lead to improved health outcomes. PMID:25375328

  9. Modal interactions between a large-wavelength inclined interface and small-wavelength multimode perturbations in a Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    McFarland, Jacob A.; Reilly, David; Black, Wolfgang; Greenough, Jeffrey A.; Ranjan, Devesh

    2015-07-01

    The interaction of a small-wavelength multimodal perturbation with a large-wavelength inclined interface perturbation is investigated for the reshocked Richtmyer-Meshkov instability using three-dimensional simulations. The ares code, developed at Lawrence Livermore National Laboratory, was used for these simulations, and a detailed comparison of simulation results and experiments performed at the Georgia Tech Shock Tube facility is presented first for code validation. Simulation results are presented for four cases that vary in large-wavelength perturbation amplitude and the presence of secondary small-wavelength multimode perturbations. Previously developed measures of mixing and turbulence quantities are presented that highlight the large variation in perturbation length scales created by the inclined interface and the multimode complex perturbation. Measures of entrainment and turbulence anisotropy are developed that help to identify the effects of, and competition between, each perturbation type. It is shown through multiple measures that before reshock the flow possesses a distinct memory of the initial conditions that is present in both large-scale-driven entrainment measures and small-scale-driven mixing measures. After reshock the flow develops to a turbulent-like state that retains a memory of high-amplitude but not low-amplitude large-wavelength perturbations. It is also shown that the high-amplitude large-wavelength perturbation is capable of producing small-scale mixing and turbulent features similar to the small-wavelength multimode perturbations.

  10. Antimicrobial residues in animal waste and water resources proximal to large-scale swine and poultry feeding operations

    USGS Publications Warehouse

    Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.

    2002-01-01

    Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.

  11. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    NASA Astrophysics Data System (ADS)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  12. Legume genome evolution viewed through the Medicago truncatula and Lotus japonicus genomes

    PubMed Central

    Cannon, Steven B.; Sterck, Lieven; Rombauts, Stephane; Sato, Shusei; Cheung, Foo; Gouzy, Jérôme; Wang, Xiaohong; Mudge, Joann; Vasdewani, Jayprakash; Schiex, Thomas; Spannagl, Manuel; Monaghan, Erin; Nicholson, Christine; Humphray, Sean J.; Schoof, Heiko; Mayer, Klaus F. X.; Rogers, Jane; Quétier, Francis; Oldroyd, Giles E.; Debellé, Frédéric; Cook, Douglas R.; Retzel, Ernest F.; Roe, Bruce A.; Town, Christopher D.; Tabata, Satoshi; Van de Peer, Yves; Young, Nevin D.

    2006-01-01

    Genome sequencing of the model legumes, Medicago truncatula and Lotus japonicus, provides an opportunity for large-scale sequence-based comparison of two genomes in the same plant family. Here we report synteny comparisons between these species, including details about chromosome relationships, large-scale synteny blocks, microsynteny within blocks, and genome regions lacking clear correspondence. The Lotus and Medicago genomes share a minimum of 10 large-scale synteny blocks, each with substantial collinearity and frequently extending the length of whole chromosome arms. The proportion of genes syntenic and collinear within each synteny block is relatively homogeneous. Medicago–Lotus comparisons also indicate similar and largely homogeneous gene densities, although gene-containing regions in Mt occupy 20–30% more space than Lj counterparts, primarily because of larger numbers of Mt retrotransposons. Because the interpretation of genome comparisons is complicated by large-scale genome duplications, we describe synteny, synonymous substitutions and phylogenetic analyses to identify and date a probable whole-genome duplication event. There is no direct evidence for any recent large-scale genome duplication in either Medicago or Lotus but instead a duplication predating speciation. Phylogenetic comparisons place this duplication within the Rosid I clade, clearly after the split between legumes and Salicaceae (poplar). PMID:17003129

  13. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  14. Opportunities and challenges in industrial plantation mapping in big data era

    NASA Astrophysics Data System (ADS)

    Dong, J.; Xiao, X.; Qin, Y.; Chen, B.; Wang, J.; Kou, W.; Zhai, D.

    2017-12-01

    With the increasing demand for timber, rubber, and palm oil in the world market, industrial plantations have expanded dramatically, especially in Southeast Asia, affecting ecosystem services and human wellbeing. However, existing efforts on plantation mapping are still limited, constraining our understanding of the magnitude of plantation expansion and its potential environmental effects. Here we present a literature review of existing efforts on plantation mapping based on one or multiple remote sensing sources, covering rubber, oil palm, and eucalyptus plantations. The biophysical features and spectral characteristics of plantations are introduced first, followed by a comparison of existing algorithms across different plantation types. Based on that, we propose potential improvements in large-scale plantation mapping based on the virtual constellation of multiple sensors, citizen science tools, and cloud computing technology. Based on the literature review, we discuss a series of issues for future large-scale operational plantation mapping.

  15. A curated transcriptomic dataset collection relevant to embryonic development associated with in vitro fertilization in healthy individuals and patients with polycystic ovary syndrome.

    PubMed

    Mackeh, Rafah; Boughorbel, Sabri; Chaussabel, Damien; Kino, Tomoshige

    2017-01-01

    The collection of large-scale datasets available in public repositories is rapidly growing and providing opportunities to identify and fill gaps in different fields of biomedical research. However, users of these datasets should be able to selectively browse datasets related to their field of interest. Here we make available a collection of transcriptome datasets related to human follicular cells, in the process of their development during in vitro fertilization, from normal individuals or patients with polycystic ovary syndrome. After RNA-seq dataset exclusion and careful selection based on study description and sample information, 12 datasets, encompassing a total of 85 unique transcriptome profiles, were identified in the NCBI Gene Expression Omnibus and uploaded to the Gene Expression Browser (GXB), a web application specifically designed for interactive query and visualization of integrated large-scale data. Once the datasets were annotated in GXB, multiple sample groupings were made to create rank lists that allow easy data interpretation and comparison. The GXB tool also allows users to browse a single gene across multiple projects to evaluate its expression profiles in multiple biological systems/conditions in web-based customized graphical views. The curated dataset is accessible at the following link: http://ivf.gxbsidra.org/dm3/landing.gsp.

  16. A curated transcriptomic dataset collection relevant to embryonic development associated with in vitro fertilization in healthy individuals and patients with polycystic ovary syndrome

    PubMed Central

    Mackeh, Rafah; Boughorbel, Sabri; Chaussabel, Damien; Kino, Tomoshige

    2017-01-01

    The collection of large-scale datasets available in public repositories is rapidly growing and providing opportunities to identify and fill gaps in different fields of biomedical research. However, users of these datasets should be able to selectively browse datasets related to their field of interest. Here we make available a collection of transcriptome datasets related to human follicular cells, in the process of their development during in vitro fertilization, from normal individuals or patients with polycystic ovary syndrome. After RNA-seq dataset exclusion and careful selection based on study description and sample information, 12 datasets, encompassing a total of 85 unique transcriptome profiles, were identified in the NCBI Gene Expression Omnibus and uploaded to the Gene Expression Browser (GXB), a web application specifically designed for interactive query and visualization of integrated large-scale data. Once the datasets were annotated in GXB, multiple sample groupings were made to create rank lists that allow easy data interpretation and comparison. The GXB tool also allows users to browse a single gene across multiple projects to evaluate its expression profiles in multiple biological systems/conditions in web-based customized graphical views. The curated dataset is accessible at the following link: http://ivf.gxbsidra.org/dm3/landing.gsp. PMID:28413616

  17. Large-Scale Diversity of Slope Fishes: Pattern Inconsistency between Multiple Diversity Indices

    PubMed Central

    Gaertner, Jean-Claude; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A.; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro

    2013-01-01

    Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3′- 45°7′ N; 5°3′W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted with both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly-admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness. PMID:23843962
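To see why multiple diversity indices can disagree, compare two of the simplest: species richness counts species and ignores evenness, while the Shannon index weighs evenness. A minimal illustration with invented counts (not the paper's 11-index analysis):

```python
# Two basic diversity indices that can rank communities differently.
import math

def richness(counts):
    """Number of species with at least one individual."""
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over observed species."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total)
                for c in counts if c > 0)

even = [10, 10, 10, 10]    # 4 species, perfectly even
skewed = [37, 1, 1, 1]     # same richness, one dominant species
print(richness(even), richness(skewed))  # → 4 4
print(round(shannon(even), 3), round(shannon(skewed), 3))
```

Both communities have identical richness, but Shannon diversity is far lower for the skewed one, which is exactly the kind of pattern inconsistency the study documents across indices.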

  18. A Survey of Residents' Perceptions of the Effect of Large-Scale Economic Developments on Perceived Safety, Violence, and Economic Benefits

    PubMed Central

    Geller, Ruth; Bear, Todd M.; Foulds, Abigail L.; Duell, Jessica; Sharma, Ravi

    2015-01-01

    Background. Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. Objective. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits. Methods. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Results. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, recorded crime decreased in both neighborhoods. Conclusions. Large-scale economic developments directly influence perceptions of violence, which can diverge from actual violence rates. PMID:26273310

  19. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
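The single-relationship fit that such hierarchical models generalize can be sketched as ordinary least squares on log-transformed data. This is a toy with exact synthetic data, not the paper's Bayesian machinery, which instead shares information across species-level exponents:

```python
# Fit y = a * x**b by OLS on log scales (toy allometric example).
import math

def fit_power_law(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression is the allometric exponent b.
    num = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    den = sum((u - mx) ** 2 for u in lx)
    b = num / den
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data following y = 2 * x**0.75 exactly
xs = [1, 2, 4, 8, 16]
ys = [2 * x ** 0.75 for x in xs]
a, b = fit_power_law(xs, ys)
print(round(a, 3), round(b, 3))  # → 2.0 0.75
```

A hierarchical Bayesian version would fit one such exponent per species while drawing all exponents from a shared distribution, so the distribution of fitted exponents can be tested against a universal-model prediction such as b = 3/4.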

  20. A fracture criterion for widespread cracking in thin-sheet aluminum alloys

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Dawicke, D. S.; Sutton, M. A.; Bigelow, C. A.

    1993-01-01

    An elastic-plastic finite-element analysis was used with a critical crack-tip-opening angle (CTOA) fracture criterion to model stable crack growth in thin-sheet 2024-T3 aluminum alloy panels with single and multiple-site damage (MSD) cracks. Comparisons were made between critical angles determined from the analyses and those measured with photographic methods. Calculated load against crack extension and load against crack-tip displacement on single crack specimens agreed well with test data even for large-scale plastic deformations. The analyses were also able to predict the stable tearing behavior of large lead cracks in the presence of stably tearing MSD cracks. Small MSD cracks significantly reduced the residual strength for large lead cracks.

  1. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    PubMed

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  2. Confronting weather and climate models with observational data from soil moisture networks over the United States

    PubMed Central

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.

    2018-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
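The aggregation result described in the abstract above can be illustrated with a toy calculation. The sketch below is not from the paper; it assumes soil moisture memory is measured as lag-1 autocorrelation (the paper's exact memory metric may differ) and uses synthetic station data:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation, a simple proxy for soil moisture memory."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
    return float(np.sum(x0 * x1) / np.sqrt(np.sum(x0**2) * np.sum(x1**2)))

rng = np.random.default_rng(0)
# Synthetic daily soil moisture at 3 nearby stations: a shared slow
# seasonal signal plus independent station-level noise.
t = np.arange(365)
signal = 0.3 + 0.1 * np.sin(2 * np.pi * t / 365)
stations = [signal + 0.02 * rng.standard_normal(t.size) for _ in range(3)]

# Aggregating stations first barely changes the variance estimate...
var_mean_of_stations = np.mean([s.var() for s in stations])
var_of_aggregate = np.mean(stations, axis=0).var()

# ...but memory (lag-1 autocorrelation) is more sensitive to aggregation,
# because averaging suppresses the station-level noise.
mem_single = lag1_autocorr(stations[0])
mem_aggregate = lag1_autocorr(np.mean(stations, axis=0))
```

With these toy numbers the variance estimates differ by only a few percent while the memory of the aggregate is noticeably higher than that of a single station, mirroring the sensitivity the authors report.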

  3. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    NASA Technical Reports Server (NTRS)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  4. Confronting weather and climate models with observational data from soil moisture networks over the United States.

    PubMed

    Dirmeyer, Paul A; Wu, Jiexia; Norton, Holly E; Dorigo, Wouter A; Quiring, Steven M; Ford, Trenton W; Santanello, Joseph A; Bosilovich, Michael G; Ek, Michael B; Koster, Randal D; Balsamo, Gianpaolo; Lawrence, David M

    2016-04-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  5. Comparison of Intelligibility Measures for Adults with Parkinson's Disease, Adults with Multiple Sclerosis, and Healthy Controls

    ERIC Educational Resources Information Center

    Stipancic, Kaila L.; Tjaden, Kris; Wilding, Gregory

    2016-01-01

    Purpose: This study obtained judgments of sentence intelligibility using orthographic transcription for comparison with previously reported intelligibility judgments obtained using a visual analog scale (VAS) for individuals with Parkinson's disease and multiple sclerosis and healthy controls (K. Tjaden, J. E. Sussman, & G. E. Wilding, 2014).…

  6. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to design and implement computer software for solving large-scale acoustic problems arising from unified finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processor capabilities offered by modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (mixed direct and iterative) equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures is evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing commercial and/or public-domain software are also included whenever possible.

  7. Efficient quantum transmission in multiple-source networks.

    PubMed

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-04-02

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.

  8. Towards Productive Critique of Large-Scale Comparisons in Education

    ERIC Educational Resources Information Center

    Gorur, Radhika

    2017-01-01

    International large-scale assessments and comparisons (ILSAs) in education have become significant policy phenomena. How a country fares in these assessments has come to signify not only how a nation's education system is performing, but also its future prospects in a global economic "race". These assessments provoke passionate arguments…

  9. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method based on image evaluation of Gaussian-fitting structural similarity and analysis of these multiple source factors is proposed to improve the accuracy of laser stripe center extraction. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments on a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
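The Gaussian shape of the stripe's gray-level profile is what makes sub-pixel center extraction possible. As a hedged illustration (not the paper's algorithm), a common trick fits a parabola to the log-intensity at the peak and its two neighbors; for an ideal Gaussian profile the parabola vertex is exactly the stripe center:

```python
import numpy as np

def gaussian_stripe_center(intensity):
    """Sub-pixel stripe center from a cross-section gray-level profile.

    Fits a parabola to log-intensity at the discrete peak and its two
    neighbors; for a Gaussian profile the vertex is the stripe center.
    """
    i = int(np.argmax(intensity))
    y0, y1, y2 = np.log(intensity[i - 1 : i + 2])
    # Vertex of the parabola through (i-1, y0), (i, y1), (i+1, y2)
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Synthetic Gaussian stripe centered at 12.3 px (sigma = 2 px)
x = np.arange(25)
profile = 200.0 * np.exp(-((x - 12.3) ** 2) / (2 * 2.0**2)) + 1.0
center = gaussian_stripe_center(profile)  # close to 12.3
```

In practice the deviations the paper addresses (reflectivity, light distribution) distort the profile away from this ideal Gaussian, which is why a compensation step is needed on top of a plain fit.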

  10. HASEonGPU-An adaptive, load-balanced MPI/GPU-code for calculating the amplified spontaneous emission in high power laser media

    NASA Astrophysics Data System (ADS)

    Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.

    2016-10-01

    We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large-size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphics processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium-size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.

  11. Wetlands as large-scale nature-based solutions: status and future challenges for research and management

    NASA Astrophysics Data System (ADS)

    Thorslund, Josefin; Jarsjö, Jerker; Destouni, Georgia

    2017-04-01

    Wetlands are often considered as nature-based solutions that can provide a multitude of services of great social, economic and environmental value to humankind. The services may include recreation, greenhouse gas sequestration, contaminant retention, coastal protection, groundwater level and soil moisture regulation, flood regulation and biodiversity support. Changes in land-use, water use and climate can all impact wetland functions and occur at scales extending well beyond the local scale of an individual wetland. However, in practical applications, management decisions usually regard and focus on individual wetland sites and local conditions. To understand the potential usefulness and services of wetlands as larger-scale nature-based solutions, e.g. for mitigating negative impacts from large-scale change pressures, one needs to understand the combined function of multiple wetlands at the relevant large scales. We here systematically investigate if and to what extent research so far has addressed the large-scale dynamics of landscape systems with multiple wetlands, which are likely to be relevant for understanding impacts of regional to global change. Our investigation regards key changes and impacts of relevance for nature-based solutions, such as large-scale nutrient and pollution retention, flow regulation and coastal protection. Although such large-scale knowledge is still limited, evidence suggests that the aggregated functions and effects of multiple wetlands in the landscape can differ considerably from those observed at individual wetlands. Such scale differences may have important implications for wetland function-effect predictability and management under large-scale change pressures and impacts, such as those of climate change.

  12. Screening for depression in arthritis populations: an assessment of differential item functioning in three self-reported questionnaires.

    PubMed

    Hu, Jinxiang; Ward, Michael M

    2017-09-01

    To determine if persons with arthritis differ systematically from persons without arthritis in how they respond to questions on three depression questionnaires, which include somatic items such as fatigue and sleep disturbance. We extracted data on the Centers for Epidemiological Studies Depression (CES-D) scale, the Patient Health Questionnaire-9 (PHQ-9), and the Kessler-6 (K-6) scale from three large population-based national surveys. We assessed items on these questionnaires for differential item functioning (DIF) between persons with and without self-reported physician-diagnosed arthritis using multiple indicator multiple cause models, which controlled for the underlying level of depression and important confounders. We also examined if DIF by arthritis status was similar between women and men. Although five items of the CES-D, one item of the PHQ-9, and five items of the K-6 scale had evidence of DIF based on statistical comparisons, the magnitude of each difference was less than the threshold of a small effect. The statistical differences were a function of the very large sample sizes in the surveys. Effect sizes for DIF were similar between women and men except for two items on the Patient Health Questionnaire-9. For each questionnaire, DIF accounted for 8% or less of the arthritis-depression association, and excluding items with DIF did not reduce the difference in depression scores between those with and without arthritis. Persons with arthritis respond to items on the CES-D, PHQ-9, and K-6 depression scales similarly to persons without arthritis, despite the inclusion of somatic items in these scales.

  13. European Invasion of North American Pinus strobus at Large and Fine Scales: High Genetic Diversity and Fine-Scale Genetic Clustering over Time in the Adventive Range

    PubMed Central

    Mandák, Bohumil; Hadincová, Věroslava; Mahelka, Václav; Wildová, Radka

    2013-01-01

    Background North American Pinus strobus is a highly invasive tree species in Central Europe. Using ten polymorphic microsatellite loci we compared various aspects of the large-scale genetic diversity of individuals from 30 sites in the native distribution range with those from 30 sites in the European adventive distribution range. To investigate the ascertained pattern of genetic diversity of this intercontinental comparison further, we surveyed fine-scale genetic diversity patterns and changes over time within four highly invasive populations in the adventive range. Results Our data show that at the large scale the genetic diversity found within the relatively small adventive range in Central Europe, surprisingly, equals the diversity found within the sampled area in the native range, which is about thirty times larger. Bayesian assignment grouped individuals into two genetic clusters separating North American native populations from the European, non-native populations, without any strong genetic structure shown over either range. At the fine scale, our comparison of genetic diversity parameters among the localities and age classes yielded no evidence of an increase in genetic diversity over time. We found that spatial genetic structure (SGS) differed across age classes within the populations under study. Old trees in general completely lacked any SGS, which increased over time and reached its maximum in the sapling stage. Conclusions Based on (1) the absence of difference in genetic diversity between the native and adventive ranges, together with the lack of structure in the native range, and (2) the lack of any evidence of any temporal increase in genetic diversity at four highly invasive populations in the adventive range, we conclude that population amalgamation probably first happened in the native range, prior to introduction. 
In such a case, there would have been no need for multiple introductions from previously isolated populations, only several introductions from genetically diverse populations. PMID:23874648

  14. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    PubMed

    Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated through simulations. Copyright © 2014. Published by Elsevier Ltd.

  15. Efficient Quantum Transmission in Multiple-Source Networks

    PubMed Central

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for specially quantized multiple-source networks. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590

  16. hEIDI: An Intuitive Application Tool To Organize and Treat Large-Scale Proteomics Data.

    PubMed

    Hesse, Anne-Marie; Dupierris, Véronique; Adam, Claire; Court, Magali; Barthe, Damien; Emadali, Anouk; Masselon, Christophe; Ferro, Myriam; Bruley, Christophe

    2016-10-07

    Advances in high-throughput proteomics have led to a rapid increase in the number, size, and complexity of the associated data sets. Managing and extracting reliable information from such large series of data sets requires the use of dedicated software organized in a consistent pipeline to reduce, validate, exploit, and ultimately export data. The compilation of multiple mass-spectrometry-based identification and quantification results obtained in the context of a large-scale project represents a real challenge for developers of bioinformatics solutions. In response to this challenge, we developed a dedicated software suite called hEIDI to manage and combine both identifications and semiquantitative data related to multiple LC-MS/MS analyses. This paper describes how, through a user-friendly interface, hEIDI can be used to compile analyses and retrieve lists of nonredundant protein groups. Moreover, hEIDI allows direct comparison of series of analyses, on the basis of protein groups, while ensuring consistent protein inference and also computing spectral counts. hEIDI ensures that validated results are compliant with MIAPE guidelines as all information related to samples and results is stored in appropriate databases. Thanks to the database structure, validated results generated within hEIDI can be easily exported in the PRIDE XML format for subsequent publication. hEIDI can be downloaded from http://biodev.extra.cea.fr/docs/heidi .

  17. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses the vector of resulting weights on each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index, derives the corresponding sensor fault detectability, and utilizes the weight vector to isolate and identify multiple sensor faults, discussing their isolatability and identifiability. Simulation examples and comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
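The projection step described in this abstract can be sketched in a few lines under strong simplifying assumptions (an identity BSFM, i.e. one unit fault direction per sensor, and an invented detection threshold); the paper's actual construction of the BSFM and its monitoring index are more elaborate:

```python
import numpy as np

# Hypothetical sketch: isolate faulty sensors by projecting a deviation
# vector onto the span of normalized basic fault directions (a stand-in
# for the paper's basic sensor fault matrix, BSFM).
n_sensors = 5
bsfm = np.eye(n_sensors)  # simplest case: one unit fault direction per sensor

# Deviation vector: sensors 1 and 3 carry biases, plus small noise
rng = np.random.default_rng(1)
deviation = 0.05 * rng.standard_normal(n_sensors)
deviation[1] += 2.0
deviation[3] += -1.5

# Weights on each fault direction via least-squares projection
weights, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)

# Flag sensors whose weight magnitude exceeds an (invented) threshold
faulty = set(np.flatnonzero(np.abs(weights) > 0.5))
```

With correlated fault directions the same least-squares projection still applies, but isolatability then depends on the angles between the columns of the BSFM, which is what the paper's detectability analysis addresses.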

  18. From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data

    NASA Astrophysics Data System (ADS)

    Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.

    2007-11-01

    Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons, by organizing and comparing results from biotechnologies to address applications in biomedicine.

  19. Demonstrating a new framework for the comparison of environmental impacts from small- and large-scale hydropower and wind power projects.

    PubMed

    Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi

    2014-07-01

    Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with those of a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, more than two times larger than that of small-scale hydropower; the large land occupation of large hydropower is explained by the extent of its reservoirs. On all three other parameters, small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
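The land-occupation metric in this comparison is simple area-per-energy arithmetic. The numbers below are invented for illustration, not the paper's data:

```python
# Land occupation: occupied area divided by annual energy production.
def land_occupation_m2_per_mwh(area_m2, annual_mwh):
    return area_m2 / annual_mwh

# Illustrative (invented) figures: a reservoir hydropower plant occupying
# 20 km^2 and producing 420 GWh per year lands in the 45-50 m^2/MWh range
# the abstract reports for large hydropower.
hydro = land_occupation_m2_per_mwh(20e6, 420e3)  # ~47.6 m^2/MWh
```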

  20. Effectiveness of Large-Scale Community-Based Intensive Behavioral Intervention: A Waitlist Comparison Study Exploring Outcomes and Predictors

    ERIC Educational Resources Information Center

    Flanagan, Helen E.; Perry, Adrienne; Freeman, Nancy L.

    2012-01-01

    File review data were used to explore the impact of a large-scale publicly funded Intensive Behavioral Intervention (IBI) program for young children with autism. Outcomes were compared for 61 children who received IBI and 61 individually matched children from a waitlist comparison group. In addition, predictors of better cognitive outcomes were…

  1. Forum: The Rise of International Large-Scale Assessments and Rationales for Participation

    ERIC Educational Resources Information Center

    Addey, Camilla; Sellar, Sam; Steiner-Khamsi, Gita; Lingard, Bob; Verger, Antoni

    2017-01-01

    This Forum discusses the significant growth of international large-scale assessments (ILSAs) since the mid-1990s. Addey and Sellar's contribution ("A Framework for Analysing the Multiple Rationales for Participating in International Large-Scale Assessments") outlines a framework of rationales for participating in ILSAs and examines the…

  2. Geospatial Optimization of Siting Large-Scale Solar Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
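A weighted-sum score over normalized criterion layers is one plausible form for the multi-criteria evaluation this report describes; the tool's actual algorithm, criteria names, and weights are assumptions here:

```python
import numpy as np

# Hypothetical sketch of multi-criteria siting: score each grid cell as a
# user-weighted combination of normalized criterion layers.
rng = np.random.default_rng(2)
shape = (4, 4)  # toy grid of candidate cells
layers = {
    "solar_resource": rng.random(shape),  # higher is better
    "grid_distance": rng.random(shape),   # lower is better (penalty)
    "habitat_value": rng.random(shape),   # lower is better (penalty)
}
weights = {"solar_resource": 0.5, "grid_distance": 0.3, "habitat_value": 0.2}

score = (weights["solar_resource"] * layers["solar_resource"]
         - weights["grid_distance"] * layers["grid_distance"]
         - weights["habitat_value"] * layers["habitat_value"])

# Best candidate cell under these weights
best_cell = np.unravel_index(np.argmax(score), shape)
```

Letting stakeholders supply their own weight vectors is what makes such a scheme "user-driven": the same layers yield different rankings under different priorities.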

  3. The analysis of MAI in large scale MIMO-CDMA system

    NASA Astrophysics Data System (ADS)

    Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona

    2016-12-01

    Recently, technological development has imposed rapid growth in the use of data carried by cellular services, which in turn requires higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we approach MIMO technology, which uses multiple antennas at the transmitter and receiver ends. To study the performance obtained by this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission, to benefit from a larger range of quality indicators. In the simulations, we increased the number of antennas and observed how the performance of the system changed; based on that, we compared a conventional MIMO system with a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received image.

  4. A Comparison of the Sensitivity and Fecal Egg Counts of the McMaster Egg Counting and Kato-Katz Thick Smear Methods for Soil-Transmitted Helminths

    PubMed Central

    Levecke, Bruno; Behnke, Jerzy M.; Ajjampur, Sitara S. R.; Albonico, Marco; Ame, Shaali M.; Charlier, Johannes; Geiger, Stefan M.; Hoa, Nguyen T. V.; Kamwa Ngassam, Romuald I.; Kotze, Andrew C.; McCarthy, James S.; Montresor, Antonio; Periago, Maria V.; Roy, Sheela; Tchuem Tchuenté, Louis-Albert; Thach, D. T. C.; Vercruysse, Jozef

    2011-01-01

    Background The Kato-Katz thick smear (Kato-Katz) is the diagnostic method recommended for monitoring large-scale treatment programs implemented for the control of soil-transmitted helminths (STH) in public health, yet it is difficult to standardize. A promising alternative is the McMaster egg counting method (McMaster), commonly used in veterinary parasitology, but rarely so for the detection of STH in human stool. Methodology/Principal Findings The Kato-Katz and McMaster methods were compared for the detection of STH in 1,543 subjects resident in five countries across Africa, Asia and South America. The consistency of the performance of both methods in different trials, the validity of the fixed multiplication factor employed in the Kato-Katz method and the accuracy of these methods for estimating ‘true’ drug efficacies were assessed. The Kato-Katz method detected significantly more Ascaris lumbricoides infections (88.1% vs. 75.6%, p<0.001), whereas the difference in sensitivity between the two methods was non-significant for hookworm (78.3% vs. 72.4%) and Trichuris trichiura (82.6% vs. 80.3%). The sensitivity of the methods varied significantly across trials and magnitude of fecal egg counts (FEC). Quantitative comparison revealed a significant correlation (Rs >0.32) in FEC between both methods, and indicated no significant difference in FEC, except for A. lumbricoides, where the Kato-Katz resulted in significantly higher FEC (14,197 eggs per gram of stool (EPG) vs. 5,982 EPG). For the Kato-Katz, the fixed multiplication factor resulted in significantly higher FEC than the multiplication factor adjusted for mass of feces examined for A. lumbricoides (16,538 EPG vs. 15,396 EPG) and T. trichiura (1,490 EPG vs. 1,363 EPG), but not for hookworm. The McMaster provided more accurate efficacy results (absolute difference to ‘true’ drug efficacy: 1.7% vs. 4.5%). 
Conclusions/Significance The McMaster is an alternative method for monitoring large-scale treatment programs. It is a robust (accurate multiplication factor) and accurate (reliable efficacy results) method, which can be easily standardized. PMID:21695104
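The fixed versus adjusted multiplication factors contrasted in the abstract reduce to simple arithmetic: eggs counted on the smear are scaled to eggs per gram of stool (EPG). The sketch below assumes the standard Kato-Katz template of roughly 41.7 mg of feces, which corresponds to the usual fixed factor of 24; the function names are our own illustration.

```python
def epg_fixed(egg_count, factor=24):
    """Kato-Katz EPG with the fixed multiplication factor.

    The standard 41.7 mg template gives factor ~= 1000 / 41.7 ~= 24.
    """
    return egg_count * factor

def epg_adjusted(egg_count, mass_mg):
    """EPG with the factor adjusted for the mass of feces actually
    examined (mass_mg, in milligrams)."""
    return egg_count * 1000.0 / mass_mg
```

If the smear actually holds more than the nominal 41.7 mg, the fixed factor overstates EPG relative to the adjusted factor, which is the direction of the differences reported for A. lumbricoides and T. trichiura.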

  5. Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards

    ERIC Educational Resources Information Center

    Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.

    2011-01-01

    This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search nine rewards placed in…

  6. Comparison of H-alpha synoptic charts with the large-scale solar magnetic field as observed at Stanford

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Wilcox, J. M.; Svalgaard, L.; Scherrer, P. H.; Mcintosh, P. S.

    1977-01-01

    Two methods of observing the neutral line of the large-scale photospheric magnetic field are compared: neutral line positions inferred from H-alpha photographs (McIntosh and Nolte, 1975) and observations of the photospheric magnetic field made with low spatial resolution (three minutes) and high sensitivity using the Stanford magnetograph. The comparison is found to be very favorable.

  7. FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING SOUTH (with scale stick). - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Large Gun Position, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI

  8. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  9. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  10. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  11. The first in situ observation of torsional Alfvén waves during the interaction of large-scale magnetic clouds

    NASA Astrophysics Data System (ADS)

    Raghav, Anil N.; Kule, Ankita

    2018-05-01

    Large-scale magnetic clouds such as coronal mass ejections (CMEs) are fundamental drivers of space weather. The interaction of multiple CMEs in interplanetary space affects their dynamic evolution and geo-effectiveness. Complex and merged multiple magnetic clouds appear as the in situ signature of interacting CMEs. Alfvén waves are speculated to be one of the major possible energy exchange/dissipation mechanisms during the interaction; however, no such observational evidence has been reported in the literature. Case studies of CME-CME collision events suggest that the magnetic and thermal energy of the CME is converted into kinetic energy. Moreover, the magnetic reconnection process is thought to be responsible for the merging of multiple magnetic clouds. Here, we present unambiguous evidence of sunward torsional Alfvén waves in the interaction region after the super-elastic collision of multiple CMEs. The Walén relation is used to confirm the presence of Alfvén waves in the interaction region of multiple CMEs/magnetic clouds. We conclude that Alfvén waves and magnetic reconnection are the possible energy exchange/dissipation mechanisms during large-scale magnetic cloud collisions. This study has significant implications not only for CME-magnetosphere interactions but also for the interstellar medium, where interactions of large-scale magnetic clouds are possible.
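The Walén relation mentioned in the abstract compares velocity fluctuations with Alfvén-velocity fluctuations, δV_A = δB/√(μ₀ρ); a regression slope of magnitude near 1 indicates Alfvénic fluctuations, with the sign indicating the propagation sense along the magnetic field. A minimal sketch of such a test (our own illustration in SI units, not the authors' code):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def walen_slope(dv, db, rho):
    """Least-squares slope of velocity fluctuations dv (m/s) against
    Alfven-velocity fluctuations dV_A = dB / sqrt(mu0 * rho), with
    db in tesla and mass density rho in kg/m^3.

    |slope| near 1 flags an Alfvenic interval (e.g. a torsional
    Alfven wave); the sign gives the propagation sense along B.
    """
    dva = db / np.sqrt(MU0 * rho)
    return np.sum(dv * dva) / np.sum(dva * dva)
```

For a purely Alfvénic synthetic interval the slope is exactly ±1; in real spacecraft data a threshold such as |slope| > 0.8 is often applied, though the specific cutoff used by the authors is not stated in this abstract.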

  12. Framing Innovation: The Impact of the Superintendent's Technology Infrastructure Decisions on the Acceptance of Large-Scale Technology Initiatives

    ERIC Educational Resources Information Center

    Arnold, Erik P.

    2014-01-01

    A multiple-case qualitative study of five school districts that had implemented various large-scale technology initiatives was conducted to describe what superintendents do to gain acceptance of those initiatives. The large-scale technology initiatives in the five participating districts included 1:1 District-Provided Device laptop and tablet…

  13. Comparison of constitutive flow resistance equations based on the Manning and Chezy equations applied to natural rivers

    USGS Publications Warehouse

    Bjerklie, David M.; Dingman, S. Lawrence; Bolster, Carl H.

    2005-01-01

    A set of conceptually derived in‐bank river discharge–estimating equations (models), based on the Manning and Chezy equations, are calibrated and validated using a database of 1037 discharge measurements in 103 rivers in the United States and New Zealand. The models are compared to a multiple regression model derived from the same data. The comparison demonstrates that in natural rivers, using an exponent on the slope variable of 0.33 rather than the traditional value of 0.5 reduces the variance associated with estimating flow resistance. Mean model uncertainty, assuming a constant value for the conductance coefficient, is less than 5% for a large number of estimates, and 67% of the estimates would be accurate within 50%. The models have potential application where site‐specific flow resistance information is not available and can be the basis for (1) a general approach to estimating discharge from remotely sensed hydraulic data, (2) comparison to slope‐area discharge estimates, and (3) large‐scale river modeling.
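The slope-exponent finding can be sketched with a generic Manning-type estimator of the form Q = k·A·R^(2/3)·S^p, where the paper's result is that p = 0.33 reduces variance relative to the traditional p = 0.5. The conductance coefficient value below is a placeholder for illustration, not the paper's calibrated constant.

```python
def discharge(area_m2, hyd_radius_m, slope, k=23.3, slope_exp=0.33):
    """Manning-type in-bank discharge estimate Q = k * A * R^(2/3) * S^p.

    area_m2: flow cross-sectional area; hyd_radius_m: hydraulic
    radius; slope: water-surface slope (dimensionless). k is a
    constant conductance coefficient (placeholder value), and
    slope_exp = 0.33 is the exponent the paper favors over the
    traditional Manning value of 0.5.
    """
    return k * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** slope_exp
```

Because natural channel slopes are much less than 1, S^0.33 > S^0.5, so the two exponents imply systematically different conductance coefficients when calibrated to the same discharge data.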

  14. An overview of sensor calibration inter-comparison and applications

    USGS Publications Warehouse

    Xiong, Xiaoxiong; Cao, Changyong; Chander, Gyanesh

    2010-01-01

    Long-term climate data records (CDR) are often constructed using observations made by multiple Earth observing sensors over a broad range of spectra and a large scale in both time and space. These sensors can be of the same or different types operated on the same or different platforms. They can be developed and built with different technologies and are likely operated over different time spans. It has been known that the uncertainty of climate models and data records depends not only on the calibration quality (accuracy and stability) of individual sensors, but also on their calibration consistency across instruments and platforms. Therefore, sensor calibration inter-comparison and validation have become increasingly demanding and will continue to play an important role for a better understanding of the science product quality. This paper provides an overview of different methodologies, which have been successfully applied for sensor calibration inter-comparison. Specific examples using different sensors, including MODIS, AVHRR, and ETM+, are presented to illustrate the implementation of these methodologies.

  15. Electronic quenching of O({sup 1}D) by Xe: Oscillations in the product angular distribution and their dependence on collision energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garofalo, Lauren A.; Smith, Mica C.; Dagdigian, Paul J., E-mail: pjdagdigian@jhu.edu

    2015-08-07

    The dynamics of the O({sup 1}D) + Xe electronic quenching reaction was investigated in a crossed beam experiment at four collision energies. Marked large-scale oscillations in the differential cross sections were observed for the inelastic scattering products, O({sup 3}P) and Xe. The shape and relative phases of the oscillatory structure depend strongly on collision energy. Comparison of the experimental results with time-independent scattering calculations shows qualitatively that this behavior is caused by Stueckelberg interferences, for which the quantum phases of the multiple reaction pathways accessible during electronic quenching constructively and destructively interfere.

  16. Impact of playing American professional football on long-term brain function.

    PubMed

    Amen, Daniel G; Newberg, Andrew; Thatcher, Robert; Jin, Yi; Wu, Joseph; Keator, David; Willeumier, Kristen

    2011-01-01

    The authors recruited 100 active and former National Football League players, representing 27 teams and all positions. Players underwent a clinical history, brain SPECT imaging, qEEG, and multiple neuropsychological measures, including MicroCog. Relative to a healthy-comparison group, players showed global decreased perfusion, especially in the prefrontal, temporal, parietal, and occipital lobes, and cerebellar regions. Quantitative EEG findings were consistent, showing elevated slow waves in the frontal and temporal regions. Significant decreases from normal values were found in most neuropsychological tests. This is the first large-scale brain-imaging study to demonstrate significant differences consistent with a chronic brain trauma pattern in professional football players.

  17. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620

  18. Neighborhood scale quantification of ecosystem goods and ...

    EPA Pesticide Factsheets

    Ecosystem goods and services are those ecological structures and functions that humans can directly relate to their state of well-being. Ecosystem goods and services include, but are not limited to, a sufficient fresh water supply, fertile lands to produce agricultural products, shading, air and water of sufficient quality for designated uses, flood water retention, and places to recreate. The US Environmental Protection Agency (USEPA) Office of Research and Development’s Tampa Bay Ecosystem Services Demonstration Project (TBESDP) modeling efforts organized existing literature values for biophysical attributes and processes related to EGS. The goal was to develop a database for informing map-based EGS assessments for current and future land cover/use scenarios at multiple scales. This report serves as a demonstration of applying an EGS assessment approach at the large neighborhood scale (~1,000 acres of residential parcels plus common areas). Here, we present mapped inventories of ecosystem goods and services production at a neighborhood scale within the Tampa Bay, FL region. Comparisons of the inventory between two alternative neighborhood designs are presented as an example of how one might apply EGS concepts at this scale.

  19. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Sun

    2010-03-01

    Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool to assess students' knowledge integration ability.
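The Rasch Partial Credit Model used in the analysis assigns, for each polytomous item, a probability to each score category from a person ability θ and item step difficulties δ_j. A minimal sketch of the category-probability formula (our own illustration; the variable names are assumptions, not from the study):

```python
import math

def pcm_probs(theta, deltas):
    """Rasch Partial Credit Model category probabilities for one item.

    theta: person ability on the logit scale.
    deltas: step difficulties delta_1..delta_m for an item scored 0..m.

    P(X = k) is proportional to exp(sum_{j<=k} (theta - delta_j)),
    with the empty sum for k = 0 contributing exp(0) = 1.
    """
    cum = [0.0]
    for d in deltas:
        cum.append(cum[-1] + (theta - d))
    numers = [math.exp(c) for c in cum]
    total = sum(numers)
    return [n / total for n in numers]
```

With a single step difficulty the model reduces to the dichotomous Rasch model, which is how multiple-choice and dichotomously-scored open-ended items fit into the same analysis as partial-credit rubrics.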

  20. ZnO-based multiple channel and multiple gate FinMOSFETs

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Ting; Huang, Hung-Lin; Tseng, Chun-Yen; Lee, Hsin-Ying

    2016-02-01

    In recent years, zinc oxide (ZnO)-based metal-oxide-semiconductor field-effect transistors (MOSFETs) have attracted much attention, because ZnO-based semiconductors possess several advantages, including large exciton binding energy, nontoxicity, biocompatibility, low material cost, and a wide direct bandgap. Moreover, the ZnO-based MOSFET is one of the most promising devices, due to its applications in microwave power amplifiers, logic circuits, large-scale integrated circuits, and logic swing. In this study, to enhance the performance of ZnO-based MOSFETs, ZnO-based multiple channel and multiple gate structured FinMOSFETs were fabricated using a simple laser interference photolithography method and a self-aligned photolithography method. The multiple channel structure provided additional sidewall depletion width control to improve channel controllability, because the multiple channel sidewall portions were surrounded by the gate electrode. Furthermore, the multiple gate structure had a shorter distance between source and gate and a shorter gate length between two gates to enhance gate operating performance; the shorter source-gate distance could also enhance the electron velocity in the channel fin structure. In this work, ninety-one channels and four gates were used in the FinMOSFETs. Consequently, the drain-source saturation current (IDSS) and maximum transconductance (gm) of the ZnO-based multiple channel and multiple gate structured FinFETs, operated at a drain-source voltage (VDS) of 10 V and a gate-source voltage (VGS) of 0 V, were improved from 11.5 mA/mm to 13.7 mA/mm and from 4.1 mS/mm to 6.9 mS/mm, respectively, in comparison with the conventional ZnO-based single channel and single gate MOSFETs.

  1. Comparison of 2 ultrafiltration systems for the concentration of seeded viruses from environmental waters.

    PubMed

    Olszewski, John; Winona, Linda; Oshima, Kevin H

    2005-04-01

    The use of ultrafiltration as a concentration method to recover viruses from environmental waters was investigated. Two ultrafiltration systems (hollow fiber and tangential flow) in large- (100 L) and small-scale (2 L) configurations were able to recover greater than 50% of multiple viruses (bacteriophages PP7 and T1 and poliovirus type 2) simultaneously from waters of varying turbidity (10-157 nephelometric turbidity units (NTU)). Mean recoveries (n = 3) in ground and surface water by the large-scale hollow fiber ultrafiltration system (100 L) were comparable to recoveries observed in the small-scale system (2 L). Recovery of seeded viruses in highly turbid waters from small-scale tangential flow (2 L) (screen and open channel) and hollow fiber ultrafilters (2 L) (small pilot) was greater than 70%. Clogging occurred in the hollow fiber pencil module, and in the screen and open channel filters when particulate concentrations exceeded 1.6 g/L and 5.5 g/L (dry mass), respectively. The small pilot module was able to filter all concentrates without clogging. The small pilot hollow fiber ultrafilter was used to test recovery of seeded viruses from surface waters from different geographical regions in 10-L volumes. Recoveries >70% were observed from all locations.

  2. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models, and the uncomplicated coupling to lower length scale simulations, make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  3. Context-dependence of long-term responses of terrestrial gastropod populations to large-scale disturbance.

    Treesearch

    Christopher P. Bloch; Michael R. Willi

    2006-01-01

    Large-scale natural disturbances, such as hurricanes, can have profound effects on animal populations. Nonetheless, generalizations about the effects of disturbance are elusive, and few studies consider long-term responses of a single population or community to multiple large-scale disturbance events. In the last 20 y, two major hurricanes (Hugo and Georges) have struck...

  4. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2016-03-18

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  5. The Value of Large-Scale Randomised Control Trials in System-Wide Improvement: The Case of the Reading Catch-Up Programme

    ERIC Educational Resources Information Center

    Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo

    2017-01-01

    This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…

  6. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  7. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
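A discrete multiplicative random cascade of the kind described can be sketched as a recursive subdivision: starting from the large-scale average rain rate, each cell is split into b × b subcells and multiplied by an iid generator W with E[W] = 1. The lognormal generator below is one common single-parameter choice, used here purely for illustration; the abstract's point is that this one parameter can be tied to the large-scale average rain rate.

```python
import numpy as np

def cascade_field(r0, levels, sigma, b=2, seed=0):
    """One realization of a 2D multiplicative random cascade.

    r0: large-scale average rain rate; levels: number of cascade
    steps; sigma: the single cascade parameter. Each step subdivides
    every cell into b x b subcells and multiplies by iid lognormal
    generators W = exp(sigma*Z - sigma**2/2), so E[W] = 1 and the
    ensemble-mean field remains r0 at every scale.
    """
    rng = np.random.default_rng(seed)
    field = np.full((1, 1), float(r0))
    for _ in range(levels):
        field = np.kron(field, np.ones((b, b)))   # refine the grid
        z = rng.standard_normal(field.shape)
        field *= np.exp(sigma * z - 0.5 * sigma ** 2)
    return field
```

Setting sigma = 0 recovers a uniform field equal to r0; increasing sigma increases the spatial intermittency of the simulated rainfall, which is the kind of dependence on large-scale forcing the study estimates from radar data.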

  8. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements and allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple zeroth-order estimate from each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.

  9. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226
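Of the module-detection themes listed above, seed-and-extend is easy to sketch. The following is a generic illustration of that strategy class on a toy PPI-like adjacency dict (the density score and the stopping rule are assumptions for illustration, not any specific published algorithm):

```python
def seed_and_extend(adj, seed):
    """Grow a module from a seed node by repeatedly absorbing the
    neighbor that best preserves internal edge density; stop when every
    candidate would dilute the module. Generic sketch of the
    seed-and-extend strategy, not a specific published tool."""
    module = {seed}

    def density(nodes):
        if len(nodes) < 2:
            return 1.0
        internal = sum(1 for u in nodes for v in adj[u] if v in nodes) // 2
        return internal / (len(nodes) * (len(nodes) - 1) / 2)

    while True:
        frontier = {v for u in module for v in adj[u]} - module
        if not frontier:
            break
        best = max(frontier, key=lambda v: density(module | {v}))
        if density(module | {best}) < density(module):
            break                      # any extension would dilute the module
        module.add(best)
    return module

# Toy graph: a dense clique {a, b, c, d} weakly tied to a tail e-f
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d"},
       "d": {"a", "b", "c", "e"}, "e": {"d", "f"}, "f": {"e"}}
module = seed_and_extend(adj, "a")
```

Seeded at "a", the sketch recovers the clique and refuses to absorb the low-density tail, which is the behavior module-detection methods exploit to find functionally cohesive regions of PPI networks.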

  10. Comparison of the Physical Education and Sports School Students' Multiple Intelligence Areas According to Demographic Features

    ERIC Educational Resources Information Center

    Aslan, Cem Sinan

    2016-01-01

    The aim of this study is to compare the multiple intelligence areas of a group of physical education and sports students according to their demographic features. In the study, the "Multiple Intelligence Scale", consisting of 27 items, whose Turkish validity and reliability study was done by Babacan (2012) and which is originally owned…

  11. Planning and executing complex large-scale exercises.

    PubMed

    McCormick, Lisa C; Hites, Lisle; Wakelee, Jessica F; Rucks, Andrew C; Ginter, Peter M

    2014-01-01

    Increasingly, public health departments are designing and engaging in complex operations-based full-scale exercises to test multiple public health preparedness response functions. The Department of Homeland Security's Homeland Security Exercise and Evaluation Program (HSEEP) supplies benchmark guidelines that provide a framework for both the design and the evaluation of drills and exercises; however, the HSEEP framework does not seem to have been designed to manage the development and evaluation of multiple, operations-based, parallel exercises combined into 1 complex large-scale event. Lessons learned from the planning of the Mississippi State Department of Health Emergency Support Function--8 involvement in National Level Exercise 2011 were used to develop an expanded exercise planning model that is HSEEP compliant but accounts for increased exercise complexity and is more functional for public health. The Expanded HSEEP (E-HSEEP) model was developed through changes in the HSEEP exercise planning process in areas of Exercise Plan, Controller/Evaluator Handbook, Evaluation Plan, and After Action Report and Improvement Plan development. The E-HSEEP model was tested and refined during the planning and evaluation of Mississippi's State-level Emergency Support Function-8 exercises in 2012 and 2013. As a result of using the E-HSEEP model, Mississippi State Department of Health was able to capture strengths, lessons learned, and areas for improvement, and identify microlevel issues that may have been missed using the traditional HSEEP framework. The South Central Preparedness and Emergency Response Learning Center is working to create an Excel-based E-HSEEP tool that will allow practice partners to build a database to track corrective actions and conduct many different types of analyses and comparisons.

  12. A comparison of multiple behavior models in a simulation of the aftermath of an improvised nuclear detonation.

    PubMed

    Parikh, Nidhi; Hayatnagarkar, Harshal G; Beckman, Richard J; Marathe, Madhav V; Swarup, Samarth

    2016-11-01

    We describe a large-scale simulation of the aftermath of a hypothetical 10kT improvised nuclear detonation at ground level, near the White House in Washington DC. We take a synthetic information approach, where multiple data sets are combined to construct a synthesized representation of the population of the region with accurate demographics, as well as four infrastructures: transportation, healthcare, communication, and power. In this article, we focus on the model of agents and their behavior, which is represented using the options framework. Six different behavioral options are modeled: household reconstitution, evacuation, healthcare-seeking, worry, shelter-seeking, and aiding & assisting others. Agent decision-making takes into account their health status, information about family members, information about the event, and their local environment. We combine these behavioral options into five different behavior models of increasing complexity and do a number of simulations to compare the models.

  13. A comparison of multiple behavior models in a simulation of the aftermath of an improvised nuclear detonation

    PubMed Central

    Parikh, Nidhi; Hayatnagarkar, Harshal G.; Beckman, Richard J.; Marathe, Madhav V.; Swarup, Samarth

    2016-01-01

    We describe a large-scale simulation of the aftermath of a hypothetical 10kT improvised nuclear detonation at ground level, near the White House in Washington DC. We take a synthetic information approach, where multiple data sets are combined to construct a synthesized representation of the population of the region with accurate demographics, as well as four infrastructures: transportation, healthcare, communication, and power. In this article, we focus on the model of agents and their behavior, which is represented using the options framework. Six different behavioral options are modeled: household reconstitution, evacuation, healthcare-seeking, worry, shelter-seeking, and aiding & assisting others. Agent decision-making takes into account their health status, information about family members, information about the event, and their local environment. We combine these behavioral options into five different behavior models of increasing complexity and do a number of simulations to compare the models. PMID:27909393

  14. Near-Earth Object Interception Using Nuclear Thermal Rocket Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    X-L. Zhang; E. Ball; L. Kochmanski

    Planetary defense has drawn wide study: despite the low probability of a large-scale impact, its consequences would be disastrous. The study presented here evaluates available protection strategies to identify bottlenecks limiting the scale of near-Earth object that could be deflected, using cutting-edge and near-future technologies. It discusses the use of a nuclear thermal rocket (NTR) as a propulsion device for delivery of thermonuclear payloads to deflect or destroy a long-period comet on a collision course with Earth. A 'worst plausible scenario' for the available warning time (10 months) and comet approach trajectory are determined, and empirical data are used to make an estimate of the payload necessary to deflect such a comet. Optimizing the tradeoff between early interception and large deflection payload establishes the ideal trajectory for an interception mission to follow. The study also examines the potential for multiple rocket launch dates. Comparison of propulsion technologies for this mission shows that NTR outperforms other options substantially. The discussion concludes with an estimate of the comet size (5 km) that could be deflected using NTR propulsion, given current launch capabilities.

  15. Large-scale monitoring of shorebird populations using count data and N-mixture models: Black Oystercatcher (Haematopus bachmani) surveys by land and sea

    USGS Publications Warehouse

    Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.

    2012-01-01

    Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
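The core of the N-mixture idea above — true abundance N at a site, with each repeated count a binomial thinning by detection probability p — can be sketched in a few lines. This is a hedged single-site illustration with a crude grid search standing in for the paper's Bayesian fit (the full model also mixes N across sites, e.g. with a Poisson prior; all numbers are hypothetical):

```python
import math
import random

def nmix_loglik(N, p, counts):
    """Log-likelihood of repeated counts at one site when true abundance
    is N and each individual is detected with probability p independently
    on every visit, i.e. each count is Binomial(N, p)."""
    ll = 0.0
    for y in counts:
        if y > N:
            return float("-inf")
        ll += math.log(math.comb(N, y)) + y * math.log(p) + (N - y) * math.log(1 - p)
    return ll

# Simulate five repeated counts at a site with 30 birds and detection 0.7
random.seed(42)
N_true, p_true = 30, 0.7
counts = [sum(random.random() < p_true for _ in range(N_true)) for _ in range(5)]

# Joint grid search over (N, p) -- a stand-in for the paper's Bayesian analysis
best = max(((N, p) for N in range(max(counts), 80)
            for p in [i / 100 for i in range(5, 100, 5)]),
           key=lambda cand: nmix_loglik(cand[0], cand[1], counts))
```

The key property is that repeated counts let abundance and detection be separated: the spread among visits informs p, so the estimated N can exceed the largest raw count.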

  16. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
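The selection idea in the last sentence — compare balancing algorithms by projected runtime for the current simulation state — can be sketched abstractly. The balancer interface, the cost numbers, and the names below are hypothetical, not the dissertation's actual model:

```python
def pick_balancer(loads, balancers, steps_until_next_balance):
    """Pick the load-balance algorithm with the smallest projected
    runtime: the balancer's own overhead plus the post-balance
    bottleneck (max load), paid on every step until the next rebalance.
    Hypothetical interface: each balancer maps current per-processor
    loads to (new_loads, overhead)."""
    def projected(balance):
        new_loads, overhead = balance(loads)
        return overhead + steps_until_next_balance * max(new_loads)
    return min(balancers, key=projected)

# Two toy balancers: doing nothing is free; a perfect rebalance costs 5 time units
noop = lambda L: (L, 0.0)
perfect = lambda L: ([sum(L) / len(L)] * len(L), 5.0)

loads = [4.0, 1.0, 1.0, 2.0]
best = pick_balancer(loads, [noop, perfect], steps_until_next_balance=10)
```

With a long horizon the costly rebalance pays off (projected cost 25 vs 40 time units here); with `steps_until_next_balance=1` the same function picks `noop` instead. That horizon-dependent tradeoff between cheap-but-local and expensive-but-thorough algorithms is what the dissertation's comparison model captures.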

  17. Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat.

    PubMed

    Schjetnan, Andrea Gomez Palacio; Luczak, Artur

    2011-10-19

    Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity to a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records a combined signal from all of them, where the contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of their large number of recording sites (64+) and small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recording of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2).

  18. Recording Large-scale Neuronal Ensembles with Silicon Probes in the Anesthetized Rat

    PubMed Central

    Schjetnan, Andrea Gomez Palacio; Luczak, Artur

    2011-01-01

    Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity to a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records a combined signal from all of them, where the contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of their large number of recording sites (64+) and small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recording of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2). PMID:22042361

  19. Artificial selection for structural color on butterfly wings and comparison with natural evolution.

    PubMed

    Wasik, Bethany R; Liew, Seng Fatt; Lilien, David A; Dinwiddie, April J; Noh, Heeso; Cao, Hui; Monteiro, Antónia

    2014-08-19

    Brilliant animal colors often are produced from light interacting with intricate nano-morphologies present in biological materials such as butterfly wing scales. Surveys across widely divergent butterfly species have identified multiple mechanisms of structural color production; however, little is known about how these colors evolved. Here, we examine how closely related species and populations of Bicyclus butterflies have evolved violet structural color from brown-pigmented ancestors with UV structural color. We used artificial selection on a laboratory model butterfly, B. anynana, to evolve violet scales from UV brown scales and compared the mechanism of violet color production with that of two other Bicyclus species, Bicyclus sambulos and Bicyclus medontias, which have evolved violet/blue scales independently via natural selection. The UV reflectance peak of B. anynana brown scales shifted to violet over six generations of artificial selection (i.e., in less than 1 y) as the result of an increase in the thickness of the lower lamina in ground scales. Similar scale structures and the same mechanism for producing violet/blue structural colors were found in the other Bicyclus species. This work shows that populations harbor large amounts of standing genetic variation that can lead to rapid evolution of scales' structural color via slight modifications to the scales' physical dimensions.

  20. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.

  1. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    PubMed Central

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such an NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects, organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based Mind Research Network (MRN) database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large-scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  2. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    NASA Astrophysics Data System (ADS)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national scale flood risk analyses, using high resolution Facebook Connectivity Lab population data and data from a hyper resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks are enabling flood hazard data to be produced at 90m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5m resolution, representing a resolution increase over previous countrywide data sets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.

  3. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follows the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.
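The simulation scheme described above can be sketched with a Thurstone Case-V model: each paired judgment is a Bernoulli draw whose probability follows from the true scale difference, so choice counts per pair are binomial, and scale values are recovered from inverse-normal-transformed choice proportions. This is an illustrative sketch under those standard assumptions, not the paper's actual code:

```python
import random
import statistics

def simulate_scaling_error(true_scales, n_obs, n_reps=200, seed=1):
    """Monte Carlo estimate of the error in Thurstone Case-V scale
    values derived from paired-comparison data with n_obs judgments
    per ordered pair. Returns the average standard deviation of the
    scaled-value errors across replicates."""
    nd = statistics.NormalDist()
    rng = random.Random(seed)
    n = len(true_scales)
    mean_true = sum(true_scales) / n
    errors = []
    for _ in range(n_reps):
        z = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    p_true = nd.cdf(true_scales[i] - true_scales[j])
                    wins = sum(rng.random() < p_true for _ in range(n_obs))
                    # clip so z-scores stay finite at observed 0% or 100%
                    p_hat = min(max(wins / n_obs, 0.5 / n_obs), 1 - 0.5 / n_obs)
                    z[i][j] = nd.inv_cdf(p_hat)
        est = [sum(row) / n for row in z]          # row means -> scale values
        mean_est = sum(est) / n
        errors.append(statistics.pstdev(
            (e - mean_est) - (t - mean_true)
            for e, t in zip(est, true_scales)))
    return sum(errors) / n_reps

# Error shrinks as the number of observations per pair grows
err_small = simulate_scaling_error([0.0, 0.3, 0.6, 0.9], n_obs=10)
err_large = simulate_scaling_error([0.0, 0.3, 0.6, 0.9], n_obs=200)
```

Running the two cases reproduces the paper's qualitative finding: with few observations per pair the derived scale values carry substantial error, and the error falls as the sampling size grows.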

  4. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
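The CEM's central trick — building the density matrix as a Chebyshev polynomial series in the Hamiltonian, using only matrix products instead of diagonalization — can be sketched on a dense toy problem. The Hamiltonian, temperature, and truncation order below are illustrative assumptions, not the paper's tight-binding setup (and the linear scaling itself comes from sparsity, which this small dense demo does not exploit):

```python
import numpy as np

def chebyshev_density_matrix(H, mu, beta, n_terms=200):
    """Approximate the density matrix P = f(H), with f the Fermi
    function at chemical potential mu and inverse temperature beta,
    as a truncated Chebyshev series in H. H must be pre-scaled so its
    spectrum lies inside [-1, 1]."""
    dim = H.shape[0]
    # Chebyshev coefficients of f(x) = 1/(1 + exp(beta*(x - mu))) on [-1, 1]
    k = np.arange(n_terms)
    x = np.cos(np.pi * (k + 0.5) / n_terms)                 # Chebyshev nodes
    fx = 1.0 / (1.0 + np.exp(beta * (x - mu)))
    c = np.array([(2.0 / n_terms) * np.sum(fx * np.cos(np.pi * j * (k + 0.5) / n_terms))
                  for j in range(n_terms)])
    c[0] /= 2.0
    # Recurrence T_{j+1}(H) = 2 H T_j(H) - T_{j-1}(H): matrix products only
    T_prev, T_cur = np.eye(dim), H.copy()
    P = c[0] * T_prev + c[1] * T_cur
    for j in range(2, n_terms):
        T_prev, T_cur = T_cur, 2.0 * H @ T_cur - T_prev
        P = P + c[j] * T_cur
    return P

# A small tight-binding-like chain with spectrum comfortably inside [-1, 1]
H = 0.4 * (np.diag(np.ones(7), 1) + np.diag(np.ones(7), -1))
P = chebyshev_density_matrix(H, mu=0.0, beta=40.0)
```

Against exact diagonalization the truncated series agrees to high accuracy, and at half filling the trace of P recovers the electron count; in a sparse tight-binding code each `H @ T_cur` product costs O(N), which is where the method's linear scaling originates.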

  5. Evaluation of selected recurrence measures in discriminating pre-ictal and inter-ictal periods from epileptic EEG data

    NASA Astrophysics Data System (ADS)

    Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus

    2016-04-01

    We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.

  6. A geographic comparison of selected large-scale planetary surface features

    NASA Technical Reports Server (NTRS)

    Meszaros, S. P.

    1984-01-01

    Photographic and cartographic comparisons of geographic features on Mercury, the Moon, Earth, Mars, Ganymede, Callisto, Mimas, and Tethys are presented. Planetary structures caused by impacts, volcanism, tectonics, and other natural forces are included. Each feature is discussed individually and then those of similar origin are compared at the same scale.

  7. Nonequilibrium scheme for computing the flux of the convection-diffusion equation in the framework of the lattice Boltzmann method.

    PubMed

    Chai, Zhenhua; Zhao, T S

    2014-07-01

    In this paper, we propose a local nonequilibrium scheme for computing the flux of the convection-diffusion equation with a source term in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). Both the Chapman-Enskog analysis and the numerical results show that, at the diffusive scaling, the present nonequilibrium scheme has a second-order convergence rate in space. A comparison between the nonequilibrium scheme and the conventional second-order central-difference scheme indicates that, although both schemes have a second-order convergence rate in space, the present nonequilibrium scheme is more accurate than the central-difference scheme. In addition, the flux computation rendered by the present scheme also preserves the parallel computation feature of the LBM, making the scheme more efficient than conventional finite-difference schemes in the study of large-scale problems. Finally, a comparison between the single-relaxation-time model and the MRT model is also conducted, and the results show that the MRT model is more accurate than the single-relaxation-time model, both in solving the convection-diffusion equation and in computing the flux.

  8. Feedback to Managers: A Review and Comparison of Multi-Rater Instruments for Management Development. Third Edition.

    ERIC Educational Resources Information Center

    Leslie, Jean Brittain; Fleenor, John W.

    This volume describes 24 publicly available multiple-perspective management-assessment instruments that relate self-view to the views of others on multiple management and leadership domains. Each instrument also includes an assessment-for-development focus that scales managers along a continuum of psychometric properties, and "best…

  9. Photometry of icy satellites: How important is multiple scattering in diluting shadows?

    NASA Technical Reports Server (NTRS)

    Buratti, B.; Veverka, J.

    1984-01-01

    Voyager observations have shown that the photometric properties of icy satellites are influenced significantly by large-scale roughness elements on the surfaces. While recent progress was made in treating the photometric effects of macroscopic roughness, it is still the case that even the most complete models do not account for the effects of multiple scattering fully. Multiple scattering dilutes shadows caused by large-scale features, yet for any specific model it is difficult to calculate the amount of dilution as a function of albedo. Accordingly, laboratory measurements were undertaken using the Cornell Goniometer to evaluate the magnitude of the effect.

  10. Using Model Comparisons to Understand Sources of Nitrogen Delivered to US Coastal Areas

    NASA Astrophysics Data System (ADS)

    McCrackin, M. L.; Harrison, J.; Compton, J. E.

    2011-12-01

    Nitrogen loading to water bodies can result in eutrophication-related hypoxia and degraded water quality. The relative contributions of different anthropogenic and natural sources of in-stream N cannot be directly measured at whole-watershed scales; hence, N source attribution estimates at scales beyond a small catchment must rely on models. Although such estimates have been accomplished using individual N loading models, there has not yet been a comparison of source attribution by multiple regional- and continental-scale models. We compared results from two models applied at large spatial scales: Nutrient Export from WatershedS (NEWS) and SPAtially Referenced Regressions On Watersheds (SPARROW). Despite widely divergent approaches to source attribution, NEWS and SPARROW identified the same dominant sources of N for 65% of the modeled drainage area of the continental US. Human activities accounted for over two-thirds of N delivered to the coastal zone. Regionally, the single largest sources of N predicted by both models reflect land-use patterns across the country. Sewage was an important source in densely populated regions along the east and west coasts of the US. Fertilizer and livestock manure were dominant in the Mississippi River Basin, where the bulk of agricultural areas are located. Run-off from undeveloped areas was the largest source of N delivered to coastal areas in the northwestern US. Our analysis shows that comparisons of source apportionment between models can increase confidence in modeled output by revealing areas of agreement and disagreement. We found predictions for agriculture and atmospheric deposition to be comparable between models; however, attribution to sewage was greater by SPARROW than by NEWS, while the reverse was true for natural N sources. Such differences in predictions resulted from differences in model structure and sources of input data. Nonetheless, model comparisons provide strong evidence that anthropogenic activities have a profound effect on N delivered to coastal areas of the US, especially along the Atlantic coast and Gulf of Mexico.

  11. A Study of Two Dwarf Irregular Galaxies with Asymmetrical Star Formation Distributions

    NASA Astrophysics Data System (ADS)

    Hunter, Deidre A.; Gallardo, Samavarti; Zhang, Hong-Xin; Adamo, Angela; Cook, David O.; Oh, Se-Heon; Elmegreen, Bruce G.; Kim, Hwihyun; Kahre, Lauren; Ubeda, Leonardo; Bright, Stacey N.; Ryon, Jenna E.; Fumagalli, Michele; Sacchi, Elena; Kennicutt, R. C.; Tosi, Monica; Dale, Daniel A.; Cignoni, Michele; Messa, Matteo; Grebel, Eva K.; Gouliermis, Dimitrios A.; Sabbi, Elena; Grasha, Kathryn; Gallagher, John S., III; Calzetti, Daniela; Lee, Janice C.

    2018-03-01

    Two dwarf irregular galaxies, DDO 187 and NGC 3738, exhibit a striking pattern of star formation: intense star formation is taking place in a large region occupying roughly half of the inner part of the optical galaxy. We use data on the H I distribution and kinematics and stellar images and colors to examine the properties of the environment in the high star formation rate (HSF) halves of the galaxies in comparison with the low star formation rate halves. We find that the pressure and gas density are higher on the HSF sides by 30%–70%. In addition we find in both galaxies that the H I velocity fields exhibit significant deviations from ordered rotation and there are large regions of high-velocity dispersion and multiple velocity components in the gas beyond the inner regions of the galaxies. The conditions in the HSF regions are likely the result of large-scale external processes affecting the internal environment of the galaxies and enabling the current star formation there.

  12. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
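The paper's combination of a queuing-based response-time objective with a genetic-algorithm solver can be sketched in miniature. The following toy assigns demand locations to depots, scores an assignment as travel time plus M/M/1 waiting time at each depot, and searches with a simple GA; the queuing model, cost numbers, and GA parameters are deliberately simplified stand-ins, not the paper's formulation:

```python
import random

def response_time(assign, travel, rate, service_rate):
    """Total expected response time: travel time plus M/M/1 waiting time
    at each depot. assign[i] is the depot serving demand location i;
    an overloaded depot makes the assignment infeasible."""
    load = {}
    for i, d in enumerate(assign):
        load[d] = load.get(d, 0.0) + rate[i]
    total = 0.0
    for i, d in enumerate(assign):
        if load[d] >= service_rate:
            return float("inf")                                   # unstable queue
        wait = load[d] / (service_rate * (service_rate - load[d]))  # M/M/1 Wq
        total += travel[i][d] + wait
    return total

def ga_assign(travel, rate, service_rate, n_depots, pop=40, gens=120, seed=0):
    """Genetic algorithm over demand-to-depot assignments: elitism,
    one-point crossover, random mutation."""
    rng = random.Random(seed)
    n = len(rate)
    fitness = lambda a: response_time(a, travel, rate, service_rate)
    popn = [[rng.randrange(n_depots) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[: pop // 4]                  # keep the best quarter
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.3:                # mutation
                child[rng.randrange(n)] = rng.randrange(n_depots)
            children.append(child)
        popn = elite + children
    best = min(popn, key=fitness)
    return best, fitness(best)

# Six demand locations, two depots; travel[i] = [cost to depot 0, cost to depot 1]
travel = [[1, 4], [2, 3], [4, 1], [3, 2], [1, 5], [5, 1]]
rate = [0.3] * 6                 # arrival rate of rescue demands per location
best, t = ga_assign(travel, rate, service_rate=1.5, n_depots=2)
```

The queuing term is what makes this more than a shortest-travel assignment: piling all demands onto one depot would be infeasible (infinite waiting time), so the GA is driven toward assignments that trade travel distance against queue congestion, which is the behavior the paper's minimal-response-time model captures at scale.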

  13. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale affected areas of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  14. Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2012-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) to describe the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second-order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation rate. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at different streamwise locations. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with a previous formulation by the authors. A large number of predictions are generated using three different nozzles over a wide range of fully expanded jet Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise experimental facilities. In addition, two models for the so-called fine-scale mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include propagation effects.

  15. Validity and reliability of the multidimensional assessment of fatigue scale in Iranian patients with relapsing-remitting subtype of multiple sclerosis.

    PubMed

    Behrangrad, Shabnam; Kordi Yoosefinejad, Amin

    2018-03-01

    The purpose of this study is to investigate the validity and reliability of the Persian version of the Multidimensional Assessment of Fatigue Scale (MAFS) in an Iranian population with multiple sclerosis. A self-reported survey on fatigue including the MAFS, the Fatigue Impact Scale, and demographic measures was completed by 130 patients with multiple sclerosis and 60 healthy persons sampled with a convenience method. Test-retest reliability and validity were evaluated 3 days apart. Construct validity of the MAFS was assessed with the Fatigue Impact Scale. The MAFS had high internal consistency (Cronbach's alpha >0.9) and 3-day test-retest reliability (intraclass correlation coefficient = 0.99). Correlation between the Fatigue Impact Scale and the MAFS was high (r = 0.99). Correlation between MAFS scores and the Expanded Disability Status Scale was also strong (r = 0.85). Questionnaire items showed acceptable item-scale correlation (0.968-0.993). The Persian version of the MAFS appears to be a valid and reliable questionnaire. It is an appropriate short multidimensional instrument to assess fatigue in patients with multiple sclerosis in clinical practice and research. Implications for Rehabilitation: The Persian version of the Multidimensional Assessment of Fatigue Scale is a valid and reliable instrument for the assessment and monitoring of fatigue in Persian-language patients with multiple sclerosis. It is very easy to administer and time-efficient in comparison with other instruments evaluating fatigue in patients with multiple sclerosis.

  16. Comparative Tectonics of Europa and Ganymede

    NASA Astrophysics Data System (ADS)

    Pappalardo, R. T.; Collins, G. C.; Prockter, L. M.; Head, J. W.

    2000-10-01

    Europa and Ganymede are sibling satellites with tectonic similarities and differences. Ganymede's ancient dark terrain is crossed by furrows, probably related to ancient large impacts, and has been normal faulted to various degrees. Bright grooved terrain is pervasively deformed at multiple scales and is locally highly strained, consistent with normal faulting of an ice-rich lithosphere above a ductile asthenosphere, along with minor horizontal shear. Little evidence has been identified for compressional structures. The relative roles of tectonism and icy cryovolcanism in creating bright grooved terrain remain an outstanding issue. Some ridge and trough structures within Europa's bands show tectonic similarities to Ganymede's grooved terrain, specifically sawtooth structures resembling normal fault blocks. Small-scale troughs are consistent with widened tension fractures. Shearing has produced transtensional and transpressional structures in Europan bands. Large-scale folds are recognized on Europa, with synclinal small-scale ridges and scarps probably representing folds and/or thrust blocks. Europa's ubiquitous double ridges may have originated as warm ice upwelled along tidally heated fracture zones. The morphological variety of ridges and troughs on Europa implies that care must be taken in inferring their origin. The relative youth of Europa's surface means that the satellite has preserved near-pristine morphologies of many structures, though sputter erosion could have altered the morphology of older topography. Moderate-resolution imaging has revealed lesser apparent diversity in Ganymede's ridge and trough types. Galileo's 28th orbit has brought new 20 m/pixel imaging of Ganymede, allowing direct comparison to Europa's small-scale structures.

  17. Comparing Effects of Lake- and Watershed-Scale Influences on Communities of Aquatic Invertebrates in Shallow Lakes

    PubMed Central

    Hanson, Mark A.; Herwig, Brian R.; Zimmer, Kyle D.; Fieberg, John; Vaughn, Sean R.; Wright, Robert G.; Younk, Jerry A.

    2012-01-01

    Constraints on lake communities are complex and are usually studied by using limited combinations of variables derived from measurements within or adjacent to study waters. While informative, results often provide limited insight about the magnitude of simultaneous influences operating at multiple scales, such as lake- vs. watershed-scale. To formulate comparisons of such contrasting influences, we explored factors controlling the abundance of predominant aquatic invertebrates in 75 shallow lakes in western Minnesota, USA. Using robust regression techniques, we modeled relative abundance of Amphipoda, small and large cladocera, Corixidae, aquatic Diptera, and an aggregate taxon that combined Ephemeroptera-Trichoptera-Odonata (ETO) in response to lake- and watershed-scale characteristics. Predictor variables included fish and submerged plant abundance, linear distance to the nearest wetland or lake, watershed size, and proportion of the watershed in agricultural production. Among-lake variability in invertebrate abundance was more often explained by lake-scale predictors than by variables based on watershed characteristics. For example, we identified significant associations between fish presence and community type and abundance of small and large cladocera, Amphipoda, Diptera, and ETO. Abundance of Amphipoda, Diptera, and Corixidae was also positively correlated with submerged plant abundance. We observed no associations between lake-watershed variables and abundance of our invertebrate taxa. Broadly, our results seem to indicate preeminence of lake-level influences on aquatic invertebrates in shallow lakes, but historical land-use legacies may mask important relationships. PMID:22970275

  18. Functional Genomic Landscape of Human Breast Cancer Drivers, Vulnerabilities, and Resistance

    PubMed Central

    Marcotte, Richard; Sayad, Azin; Brown, Kevin R.; Sanchez-Garcia, Felix; Reimand, Jüri; Haider, Maliha; Virtanen, Carl; Bradner, James E.; Bader, Gary D.; Mills, Gordon B.; Pe’er, Dana; Moffat, Jason; Neel, Benjamin G.

    2016-01-01

    Large-scale genomic studies have identified multiple somatic aberrations in breast cancer, including copy number alterations and point mutations. Still, identifying causal variants and emergent vulnerabilities that arise as a consequence of genetic alterations remain major challenges. We performed whole genome shRNA “dropout screens” on 77 breast cancer cell lines. Using a hierarchical linear regression algorithm to score our screen results and integrate them with accompanying detailed genetic and proteomic information, we identify vulnerabilities in breast cancer, including candidate “drivers,” and reveal general functional genomic properties of cancer cells. Comparisons of gene essentiality with drug sensitivity data suggest potential resistance mechanisms, effects of existing anti-cancer drugs, and opportunities for combination therapy. Finally, we demonstrate the utility of this large dataset by identifying BRD4 as a potential target in luminal breast cancer, and PIK3CA mutations as a resistance determinant for BET-inhibitors. PMID:26771497

  19. Artificial selection for structural color on butterfly wings and comparison with natural evolution

    PubMed Central

    Wasik, Bethany R.; Liew, Seng Fatt; Lilien, David A.; Dinwiddie, April J.; Noh, Heeso; Cao, Hui; Monteiro, Antónia

    2014-01-01

    Brilliant animal colors often are produced from light interacting with intricate nano-morphologies present in biological materials such as butterfly wing scales. Surveys across widely divergent butterfly species have identified multiple mechanisms of structural color production; however, little is known about how these colors evolved. Here, we examine how closely related species and populations of Bicyclus butterflies have evolved violet structural color from brown-pigmented ancestors with UV structural color. We used artificial selection on a laboratory model butterfly, B. anynana, to evolve violet scales from UV brown scales and compared the mechanism of violet color production with that of two other Bicyclus species, Bicyclus sambulos and Bicyclus medontias, which have evolved violet/blue scales independently via natural selection. The UV reflectance peak of B. anynana brown scales shifted to violet over six generations of artificial selection (i.e., in less than 1 y) as the result of an increase in the thickness of the lower lamina in ground scales. Similar scale structures and the same mechanism for producing violet/blue structural colors were found in the other Bicyclus species. This work shows that populations harbor large amounts of standing genetic variation that can lead to rapid evolution of scales’ structural color via slight modifications to the scales’ physical dimensions. PMID:25092295

  20. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  1. From drones to ASO: Using 'Structure-From-Motion' photogrammetry to quantify variations in snow depth at multiple scales

    NASA Astrophysics Data System (ADS)

    Skiles, M.

    2017-12-01

    The ability to accurately measure and manage the natural snow water reservoir in mountainous regions has its challenges, namely mapping of snowpack depth and snow water equivalent (SWE). Presented here is a scalable method that differentially maps snow depth using Structure from Motion (SfM), a photogrammetric technique that uses 2D images to create a 3D model/Digital Surface Model (DSM). There are challenges with applying SfM to snow: relatively uniform snow brightness can make it difficult to produce the quality images needed for processing, and vegetation can limit the ability to 'see' through the canopy to map both the ground and the snow beneath. New techniques implemented in the method to adapt to these challenges will be demonstrated. Results include a time series at (1) the plot scale, imaged with an unmanned aerial vehicle (DJI Phantom 2 adapted with a Sony A5100) over the Utah Department of Transportation Atwater Study Plot in Little Cottonwood Canyon, UT, and (2) the mountain watershed scale, imaged from the RGB camera aboard the Airborne Snow Observatory (ASO) over the headwaters of the Uncompahgre River in the San Juan Mountains, CO. At the plot scale we present comparisons to measured snow depth, and at the watershed scale we present comparisons to the ASO lidar DSM. This method is of interest due to its low cost relative to lidar, making it an accessible tool for snow research and the management of water resources. With advancing unmanned aerial vehicle technology, there are implications for scalability to map snow depth, and SWE, across large basins.
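
    The differential-mapping step described in this record reduces to subtracting co-registered surface models. A minimal sketch of that idea (the function name, toy grids, and the choice to clip negative residuals are illustrative assumptions, not taken from the record):

```python
import numpy as np

def snow_depth(dsm_snow_on, dsm_snow_free):
    """Differential snow depth: snow-on DSM minus snow-free DSM.

    Both inputs are co-registered elevation grids in metres; small
    negative differences (noise, vegetation artifacts) are clipped to 0.
    """
    depth = np.asarray(dsm_snow_on, dtype=float) - np.asarray(dsm_snow_free, dtype=float)
    return np.clip(depth, 0.0, None)

# Toy 2x2 grids (metres): bare-ground surface and snow-covered surface.
bare = np.array([[1000.0, 1001.0], [1002.0, 1003.0]])
snow = np.array([[1001.2, 1001.9], [1002.0, 1004.5]])
print(snow_depth(snow, bare))
```

In practice the two DSMs would come from separate SfM reconstructions (or, as in the record, an SfM DSM compared against a lidar DSM), so co-registration error dominates the depth uncertainty.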

  2. [Additional psychometric data for the DS1K mood questionnaire. Experience from a large sample study involving parents of young children].

    PubMed

    Danis, Ildiko; Scheuring, Noemi; Papp, Eszter; Czinner, Antal

    2012-06-01

    A new instrument for assessing depressive mood, the first version of the Depression Scale Questionnaire (DS1K), was published in 2008 by Halmai et al. This scale was used in our large sample study, in the framework of the For Healthy Offspring project, involving parents of young children. The original questionnaire was developed in small samples, so our aim was to assist further development of the instrument through psychometric analysis of the data in our large sample (n=1164). The DS1K scale was chosen to measure the parents' mood and mental state in the For Healthy Offspring project. The questionnaire was completed by 1063 mothers and 328 fathers, yielding a heterogeneous sample with respect to age and socio-demographic status. Analyses included main descriptive statistics, establishing the scale's internal consistency, and some comparisons. Results were checked in both our original and multiply imputed datasets. According to our results, the reliability of the scale was much worse than in the original study (Cronbach alpha: 0.61 versus 0.88). During the detailed item analysis it became clear that two items contributed to the observed decrease in consistency. We assumed a problem related to misreading in the case of one of these items, and checked this assumption by cross-analysis by the assumed reading level. According to our results, the reliability of the scale increased in both the lower and higher education level groups when we excluded one or both of these problematic items. However, as the number of items decreased, the relative sensitivity of the scale was also reduced, with fewer persons categorized in the risk group compared to the original scale. As an alternative solution, we suggest that the authors redefine the problematic items and retest the reliability of the instrument in a sample with diverse socio-demographic characteristics.

  3. Insights and Challenges to Integrating Data from Diverse Ecological Networks

    NASA Astrophysics Data System (ADS)

    Peters, D. P. C.

    2014-12-01

    Many of the most dramatic and surprising effects of global change occur across large spatial extents, from regions to continents, and impact multiple ecosystem types across a range of interacting spatial and temporal scales. The ability of ecologists and inter-disciplinary scientists to understand and predict these dynamics depends, in large part, on existing site-based research infrastructures that developed in response to historic events. Integrating these diverse sources of data is critical to addressing these broad-scale questions. A conceptual approach is presented to synthesize and integrate diverse sources and types of data from different networks of research sites. This approach focuses on developing derived data products through spatial and temporal aggregation that allow datasets collected with different methods to be compared. The approach is illustrated through the integration, analysis, and comparison of hundreds of long-term datasets from 50 ecological sites in the US that represent ecosystem types commonly found globally. New insights were found by comparing multiple sites using common derived data. In addition to "bringing to light" many dark data in a standardized, open access, easy-to-use format, a suite of lessons were learned that can be applied to up-and-coming research networks in the US and internationally. These lessons will be described along with the challenges, including cyber-infrastructure, cultural, and behavioral constraints associated with the use of big and little data, that may keep ecologists and inter-disciplinary scientists from taking full advantage of the vast amounts of existing and yet-to-be exposed data.

  4. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
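
    The sorted k-mer lists mentioned in this record are a standard seeding structure in genome comparison: sorting groups identical k-mers so shared seeds between genomes can be located cheaply. A minimal sketch of the idea, not the progressiveMauve implementation (function names and toy sequences are assumptions):

```python
def sorted_kmer_list(seq, k):
    """Sorted list of (k-mer, offset) pairs for one sequence."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_seeds(seq_a, seq_b, k):
    """Offsets in seq_b of k-mers also present in seq_a (candidate
    alignment anchors), reported in k-mer sorted order."""
    kmers_a = {km for km, _ in sorted_kmer_list(seq_a, k)}
    return [(km, i) for km, i in sorted_kmer_list(seq_b, k) if km in kmers_a]

print(shared_seeds("ACGTACGT", "TTACGTAA", 4))
# → [('ACGT', 2), ('CGTA', 3), ('TACG', 1)]
```

On a distributed-memory machine, the per-genome k-mer lists can be built independently on separate nodes and merged, which is one way the memory footprint per node can be kept small.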

  5. Science Competencies That Go Unassessed

    ERIC Educational Resources Information Center

    Gilmer, Penny J.; Sherdan, Danielle M.; Oosterhof, Albert; Rohani, Faranak; Rouby, Aaron

    2011-01-01

    Present large-scale assessments require the use of item formats, such as multiple choice, that can be administered and scored efficiently. This limits competencies that can be measured by these assessments. An alternative approach to large-scale assessments is being investigated that would include the use of complex performance assessments. As…

  6. Engineering Digestion: Multiscale Processes of Food Digestion.

    PubMed

    Bornhorst, Gail M; Gouseti, Ourania; Wickham, Martin S J; Bakalis, Serafim

    2016-03-01

    Food digestion is a complex, multiscale process that has recently become of interest to the food industry due to the developing links between food and health or disease. Food digestion can be studied by using either in vitro or in vivo models, each having certain advantages or disadvantages. The recent interest in food digestion has resulted in a large number of studies in this area, yet few have provided an in-depth, quantitative description of digestion processes. To provide a framework to develop these quantitative comparisons, a summary is given here of digestion processes and parallel unit operations in the food and chemical industry. Characterization parameters and phenomena are suggested for each step of digestion. In addition to the quantitative characterization of digestion processes, the multiscale aspect of digestion must also be considered. In both food systems and the gastrointestinal tract, multiple length scales are involved in food breakdown, mixing, and absorption. These different length scales influence digestion processes independently as well as through interrelated mechanisms. To facilitate optimized development of functional food products, a multiscale, engineering approach may be taken to describe food digestion processes. A framework for this approach is described in this review, as well as examples that demonstrate the importance of process characterization and of the multiple, interrelated length scales in the digestion process. © 2016 Institute of Food Technologists®

  7. Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldenson, N.; Mauger, G.; Leung, L. R.

    Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
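
    The large-ensemble estimate of internal variability described in this record amounts to measuring spread across members after removing the ensemble-mean (forced) signal. A toy sketch under stated assumptions (the synthetic ensemble, its size, and the 0.3 noise level are invented for illustration, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical initial-condition ensemble: 30 members x 40 years of a
# regional climate index under identical forcing.
n_members, n_years = 30, 40
forced = np.linspace(0.0, 1.0, n_years)               # shared forced trend
ens = forced + 0.3 * rng.standard_normal((n_members, n_years))

# Remove the ensemble mean at each time to isolate internal variability,
# then summarize the across-member spread.
anomalies = ens - ens.mean(axis=0)
internal_sd = anomalies.std(axis=0, ddof=1).mean()
print(f"internal variability (std dev): {internal_sd:.2f}")
```

With 30 members the recovered spread should sit close to the injected 0.3; an "ensemble of opportunity" would replace the single model here with small ensembles pooled across models and scenarios.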

  8. LLNL-G3Dv3: Global P wave tomography model for improved regional and teleseismic travel time prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, N. A.; Myers, S. C.; Johannesson, G.

    We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ~2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical tessellation based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle, including within the transition zone. In particular, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.

  9. A worldwide analysis of the impact of forest cover change on annual runoff across multiple spatial scales

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Liu, S.

    2017-12-01

    Despite extensive studies on hydrological responses to forest cover change in small watersheds, the hydrological responses to forest change and the associated mechanisms across multiple spatial scales have not been fully understood. This review thus examined about 312 watersheds worldwide to provide a generalized framework to evaluate hydrological responses to forest cover change and to identify the contribution of spatial scale, climate, forest type, and hydrological regime in determining the intensity of forest-change-related hydrological responses in small (<1000 km2) and large watersheds (≥1000 km2). Key findings include: 1) the increase in annual runoff associated with forest cover loss is statistically significant at multiple spatial scales whereas the effect of forest cover gain is statistically inconsistent; 2) the sensitivity of annual runoff to forest cover change tends to attenuate as watershed size increases only in large watersheds; 3) annual runoff is more sensitive to forest cover change in water-limited watersheds than in energy-limited watersheds across all spatial scales; and 4) small mixed forest-dominated watersheds or large snow-dominated watersheds are more hydrologically resilient to forest cover change. These findings improve the understanding of hydrological response to forest cover change at different spatial scales and provide a scientific underpinning for future watershed management in the context of climate change and increasing anthropogenic disturbances.

  10. Efficient algorithms for fast integration on large data sets from multiple sources.

    PubMed

    Mi, Tian; Rajasekaran, Sanguthevar; Aseltine, Robert

    2012-06-28

    Recent large-scale deployments of health information technology have created opportunities for the integration of patient medical records with disparate public health, human service, and educational databases to provide comprehensive information related to health and development. Data integration techniques, which identify records belonging to the same individual that reside in multiple data sets, are essential to these efforts. Several algorithms have been proposed in the literature that are adept at integrating records from two different datasets. Our algorithms are aimed at integrating multiple (in particular more than two) datasets efficiently. Hierarchical clustering based solutions are used to integrate multiple (in particular more than two) datasets. Edit distance is used as the basic distance calculation, while distance calculation of common input errors is also studied. Several techniques have been applied to improve the algorithms in terms of both time and space: 1) Partial Construction of the Dendrogram (PCD) that ignores the levels above the threshold; 2) Ignoring the Dendrogram Structure (IDS); 3) Faster Computation of the Edit Distance (FCED) that prunes the distance computation against the threshold using upper bounds on edit distance; and 4) a pre-processing blocking phase that limits dynamic computation to within each block. We have experimentally validated our algorithms on large simulated as well as real data. Accuracy and completeness are defined stringently to show the performance of our algorithms. In addition, we employ a four-category analysis. Comparison with FEBRL shows the robustness of our approach. In the experiments we conducted, the accuracy we observed exceeded 90% for the simulated data in most cases. 97.7% and 98.1% accuracy were achieved for the constant and proportional threshold, respectively, in a real dataset of 1,083,878 records.
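
    Two of the techniques named in this record, threshold-bounded edit distance and a blocking pre-pass, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the record fields, blocking key, and threshold are invented:

```python
from collections import defaultdict

def edit_distance(a, b, threshold):
    """Levenshtein distance with early termination: returns None as soon
    as the distance is guaranteed to exceed `threshold` (the spirit of
    bounding the computation by the matching threshold)."""
    if abs(len(a) - len(b)) > threshold:
        return None  # length difference is a lower bound on the distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        if min(cur) > threshold:  # every extension can only grow
            return None
        prev = cur
    return prev[-1] if prev[-1] <= threshold else None

def block_key(record):
    """Toy blocking key: first letter of surname plus birth year."""
    return (record["name"][0].lower(), record["year"])

records = [
    {"name": "Smith, John", "year": 1980},
    {"name": "Smyth, John", "year": 1980},
    {"name": "Brown, Ann", "year": 1975},
]

# Blocking: only records sharing a key are ever compared pairwise.
blocks = defaultdict(list)
for r in records:
    blocks[block_key(r)].append(r)

matches = []
for group in blocks.values():
    for i in range(len(group)):
        for j in range(i + 1, len(group)):
            d = edit_distance(group[i]["name"], group[j]["name"], threshold=2)
            if d is not None:
                matches.append((group[i]["name"], group[j]["name"], d))
print(matches)
# → [('Smith, John', 'Smyth, John', 1)]
```

Blocking turns the all-pairs comparison into a sum of much smaller within-block comparisons, which is what makes million-record datasets like the one in this record tractable.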

  11. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  12. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  13. Different aspects of multiplicity distribution of shower particles in central collisions with AgBr target

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Swarnapratim; Haiduc, Maria; Neagu, Alina Tania; Firu, Elena

    Different aspects of the multiplicity distribution of shower particles, such as multiplicity moments, Dq moments, and multiplicity fluctuations in terms of the scaled variance, have been studied for central events of 16O-AgBr, 22Ne-AgBr and 32S-AgBr interactions at (4.1-4.5) AGeV/c. Comparison of our results with different experimental analyses of central collisions of emulsion data has been performed whenever available.
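
    The scaled variance used in this record as a fluctuation measure is omega = (<n^2> - <n>^2) / <n>, which equals 1 for a Poisson multiplicity distribution and exceeds 1 when fluctuations are wider than Poisson. A minimal numerical sketch (the Poisson toy sample is an assumption for illustration, not emulsion data):

```python
import numpy as np

def scaled_variance(multiplicities):
    """Scaled variance omega = Var(n) / <n> of an event-by-event
    multiplicity sample; omega = 1 for Poisson, omega > 1 signals
    correlated (wider-than-Poisson) fluctuations."""
    n = np.asarray(multiplicities, dtype=float)
    return n.var() / n.mean()

# Toy event sample: Poisson-distributed shower-particle multiplicities.
rng = np.random.default_rng(42)
poisson_sample = rng.poisson(lam=20.0, size=100_000)
print(f"omega = {scaled_variance(poisson_sample):.2f}")  # close to 1.0
```

The Dq moments mentioned in the record probe the same distribution at different orders; the scaled variance is simply the second-order special case normalized to the mean.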

  14. Multiple Scales of Control on the Structure and Spatial Distribution of Woody Vegetation in African Savanna Watersheds

    PubMed Central

    Vaughn, Nicholas R.; Asner, Gregory P.; Smit, Izak P. J.; Riddel, Edward S.

    2015-01-01

    Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50–450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques. PMID:26660502

  15. Multiple Scales of Control on the Structure and Spatial Distribution of Woody Vegetation in African Savanna Watersheds.

    PubMed

    Vaughn, Nicholas R; Asner, Gregory P; Smit, Izak P J; Riddel, Edward S

    2015-01-01

    Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50-450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques.
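
The modelling strategy in this record, an environmental regression augmented with a spatial autocovariate term, can be sketched on synthetic data. Everything below (grid size, coefficients, the smooth nuisance field, the use of plain least squares rather than a GLM) is invented for illustration; the point is only that a neighbour-mean autocovariate soaks up residual spatial structure that mapped predictors miss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's setting: canopy cover on a grid,
# driven partly by a mapped predictor and partly by an unmodelled
# small-scale spatial field.
n = 32                                   # grid is n x n
elev = rng.normal(size=(n, n))           # a mapped predictor (e.g. topography)
spatial = np.cumsum(rng.normal(size=(n, n)) * 0.1, axis=0)  # smooth nuisance field
cover = 0.8 * elev + spatial + rng.normal(scale=0.3, size=(n, n))

def neighbor_mean(z):
    """Autocovariate: mean of the 4-neighbour values (edges padded)."""
    zp = np.pad(z, 1, mode="edge")
    return (zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:]) / 4.0

def r2(y, X):
    """Variance explained by a least-squares fit with intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

y = cover.ravel()
ones = np.ones_like(y)
X_env = np.column_stack([ones, elev.ravel()])
X_auto = np.column_stack([ones, elev.ravel(), neighbor_mean(cover).ravel()])

# Adding the autocovariate picks up residual spatial structure.
print(f"R2, environment only : {r2(y, X_env):.3f}")
print(f"R2, + autocovariate  : {r2(y, X_auto):.3f}")
```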

  16. Model for CO2 leakage including multiple geological layers and multiple leaky wells.

    PubMed

    Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan

    2009-02-01

Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Previously, abandoned wells from over a century of oil and gas exploration and production have been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) to oil and gas producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.

  17. Operating Reserves and Wind Power Integration: An International Comparison; Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligan, M.; Donohoo, P.; Lew, D.

    2010-10-01

    This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on an informal International Energy Agency Task 25: Large-scale Wind Integration.

  18. Comparing Shock geometry from MHD simulation to that from the Q/A-scaling analysis

    NASA Astrophysics Data System (ADS)

    Li, G.; Zhao, L.; Jin, M.

    2017-12-01

In large SEP events, ions can be accelerated at CME-driven shocks to very high energies. Spectra of heavy ions in many large SEP events show features such as roll-overs or spectral breaks. In some events when the spectra are plotted in energy/nucleon they can be shifted relative to each other so that the spectra align. The amount of shift is charge-to-mass ratio (Q/A) dependent and varies from event to event. In the work of Li et al. (2009), the Q/A dependence of the scaling is related to shock geometry when the CME-driven shock is close to the Sun. For events where multiple in-situ spacecraft observations exist, one may expect that different spacecraft are connected to different portions of the CME-driven shock that have different shock geometries, therefore yielding different Q/A dependences. At the same time, shock geometry can also be obtained from MHD simulations. This means we can compare shock geometry from two completely different approaches: one from MHD simulation and the other from in-situ spectral fitting. In this work, we examine this comparison for selected events.

  19. Comparative analysis of European wide marine ecosystem shifts: a large-scale approach for developing the basis for ecosystem-based management.

    PubMed

    Möllmann, Christian; Conversi, Alessandra; Edwards, Martin

    2011-08-23

Abrupt and rapid ecosystem shifts (where major reorganizations of food-web and community structures occur), commonly termed regime shifts, are changes between contrasting and persisting states of ecosystem structure and function. These shifts have been increasingly reported for exploited marine ecosystems around the world from the North Pacific to the North Atlantic. Understanding the drivers and mechanisms leading to marine ecosystem shifts is crucial in developing adaptive management strategies to achieve sustainable exploitation of marine ecosystems. An international workshop on a comparative approach to analysing these marine ecosystem shifts was held at Hamburg University, Institute for Hydrobiology and Fisheries Science, Germany on 1-3 November 2010. Twenty-seven scientists from 14 countries attended the meeting, representing specialists from seven marine regions, including the Baltic Sea, the North Sea, the Barents Sea, the Black Sea, the Mediterranean Sea, the Bay of Biscay and the Scotian Shelf off the Canadian East coast. The goal of the workshop was to conduct the first large-scale comparison of marine ecosystem regime shifts across multiple regional areas, in order to support the development of ecosystem-based management strategies.

  20. Independently dated paleomagnetic secular variation records from the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Haberzettl, Torsten; Henkel, Karoline; Kasper, Thomas; Ahlborn, Marieke; Su, Youliang; Wang, Junbo; Appel, Erwin; St-Onge, Guillaume; Stoner, Joseph; Daut, Gerhard; Zhu, Liping; Mäusbacher, Roland

    2015-04-01

Magnetostratigraphy has long served as a valuable tool for dating and confirming chronologies of lacustrine sediments in many parts of the world. Suitable paleomagnetic records on the Tibetan Plateau (TP) and adjacent areas are, however, extremely scarce. Here, we derive paleomagnetic records from independently radiocarbon-dated sediments from two lakes separated by 250 km on the southern central TP, Tangra Yumco and Taro Co. Characteristic remanent magnetization (ChRM) directions, determined through alternating field demagnetization of u-channel samples, document similar inclination patterns in multiple sediment cores for the past 4000 years. Comparisons to an existing record from Nam Co, a lake 350 km east of Tangra Yumco, a varve-dated record from the Makran Accretionary Wedge, records from Lakes Issyk-Kul and Baikal, and a stack record from East Asia reveal many similarities in inclination. This regional similarity demonstrates the high potential of inclination to compare records over the Tibetan Plateau and eventually date other Tibetan records stratigraphically. PSV similarities over such a large area (>3000 km) suggest a large-scale core dynamic origin rather than small scale processes like drift of the non-dipole field often associated with PSV records.

  1. Fire management over large landscapes: a hierarchical approach

    Treesearch

    Kenneth G. Boykin

    2008-01-01

    Management planning for fires becomes increasingly difficult as scale increases. Stratification provides land managers with multiple scales in which to prepare plans. Using statistical techniques, Geographic Information Systems (GIS), and meetings with land managers, we divided a large landscape of over 2 million acres (White Sands Missile Range) into parcels useful in...

  2. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  3. Framing Innovation: The Role of Distributed Leadership in Gaining Acceptance of Large-Scale Technology Initiatives

    ERIC Educational Resources Information Center

    Turner, Henry J.

    2014-01-01

    This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…

  4. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    NASA Astrophysics Data System (ADS)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and act as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  5. Data-based discharge extrapolation: estimating annual discharge for a partially gauged large river basin from its small sub-basins

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2013-12-01

Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new purely data-based scale-extrapolation method is proposed, to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
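
The scale-extrapolation arithmetic described above can be sketched in a few lines. The discharges, areas, and subset choices below are hypothetical stand-ins, not the Baltic Sea data; the sketch only shows how area-scaling the specific discharge of multiple representative sub-basin sets yields a bracket of estimates.

```python
import numpy as np

# Hypothetical numbers (not the Baltic data): annual discharge Q (km^3/yr)
# and area A (km^2) of gauged small sub-basins.
Q_sub = np.array([0.8, 1.1, 0.6, 0.9, 1.3])          # km^3 / yr
A_sub = np.array([900., 1200., 700., 1000., 1400.])  # km^2
A_large = 4.0e4                                      # target basin area, km^2

# Specific discharge (runoff depth) of each sub-basin, km/yr.
q_sub = Q_sub / A_sub

# Scale-extrapolation: assume the selected sub-basins are hydrologically
# representative, so the large basin's runoff depth is their mean.
Q_est = q_sub.mean() * A_large
print(f"estimated annual discharge: {Q_est:.1f} km^3/yr")

# Multiple equally plausible subsets give a spread of predictions that
# brackets the inherent climatic and hydrologic uncertainty.
subsets = [[0, 1], [2, 3], [1, 4], [0, 3, 4]]
estimates = [q_sub[s].mean() * A_large for s in subsets]
print(f"range over subsets: {min(estimates):.1f}-{max(estimates):.1f} km^3/yr")
```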

  6. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  7. Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions

    NASA Technical Reports Server (NTRS)

    Grujic, L. T.; Siljak, D. D.

    1973-01-01

    The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
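
The "simple algebraic criterion" for asymptotic stability of the composite system is, in one common form, an M-matrix test on the aggregate comparison matrix built from subsystem decay rates and interconnection bounds. A minimal sketch with invented 2x2 aggregates (the comparison matrices, not this paper's examples):

```python
import numpy as np

def aggregate_stable(W):
    """Algebraic stability test for an aggregate comparison matrix W.

    W has negative diagonal entries (subsystem self-decay rates) and
    nonnegative off-diagonal entries (interconnection bounds).  The
    composite system is asymptotically stable if -W is an M-matrix,
    i.e. all leading principal minors of -W are positive
    (Sevastyanov-Kotelyanskii conditions).
    """
    M = -np.asarray(W, dtype=float)
    n = M.shape[0]
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))

# Two subsystems with decay rates 2 and 3, weak coupling: stable.
W_weak = np.array([[-2.0, 0.5],
                   [0.4, -3.0]])
# Same subsystems, coupling too strong: the test fails.
W_strong = np.array([[-2.0, 4.0],
                     [3.0, -3.0]])

print(aggregate_stable(W_weak))    # True
print(aggregate_stable(W_strong))  # False
```

The same test applied to every interconnection matrix arising from structural perturbations gives the connective stability check mentioned at the end of the abstract.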

  8. A psycholinguistic database for traditional Chinese character naming.

    PubMed

    Chang, Ya-Ning; Hsu, Chun-Hsien; Tsai, Jie-Li; Chen, Chien-Liang; Lee, Chia-Ying

    2016-03-01

    In this study, we aimed to provide a large-scale set of psycholinguistic norms for 3,314 traditional Chinese characters, along with their naming reaction times (RTs), collected from 140 Chinese speakers. The lexical and semantic variables in the database include frequency, regularity, familiarity, consistency, number of strokes, homophone density, semantic ambiguity rating, phonetic combinability, semantic combinability, and the number of disyllabic compound words formed by a character. Multiple regression analyses were conducted to examine the predictive powers of these variables for the naming RTs. The results demonstrated that these variables could account for a significant portion of variance (55.8%) in the naming RTs. An additional multiple regression analysis was conducted to demonstrate the effects of consistency and character frequency. Overall, the regression results were consistent with the findings of previous studies on Chinese character naming. This database should be useful for research into Chinese language processing, Chinese education, or cross-linguistic comparisons. The database can be accessed via an online inquiry system (http://ball.ling.sinica.edu.tw/namingdatabase/index.html).
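
A regression of the kind reported here, lexical variables predicting naming RTs with a variance-accounted-for figure, can be sketched on synthetic data. The predictors, coefficients, and noise level below are invented, so the R-squared values are illustrative only, not the database's 55.8%:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the norms: log frequency, stroke count, and
# consistency predicting naming RT (ms).  All coefficients are invented.
n = 500
log_freq = rng.normal(3.0, 1.0, n)
strokes = rng.integers(1, 30, n).astype(float)
consistency = rng.uniform(0, 1, n)
rt = 650 - 40 * log_freq + 4 * strokes - 60 * consistency + rng.normal(0, 30, n)

def fit_r2(y, *predictors):
    """Fraction of RT variance explained by an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Variance explained grows as lexical variables are added, mirroring
# the hierarchical multiple-regression analyses in the database paper.
print(f"frequency only: R2 = {fit_r2(rt, log_freq):.3f}")
print(f"+ strokes     : R2 = {fit_r2(rt, log_freq, strokes):.3f}")
print(f"+ consistency : R2 = {fit_r2(rt, log_freq, strokes, consistency):.3f}")
```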

  9. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

Antenna selection of wireless communication system has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of attenuation coefficients channel matrix is generated by minimizing the key performance indicator of training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of attenuation coefficients is learned on the training antenna systems. Finally, we use the adopted deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation experimental results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven based wireless antenna selection.
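
The label-generation step described above, optimizing a key performance indicator over antenna subsets, can be sketched with channel capacity as the indicator. The channel dimensions, SNR, and the choice of capacity are assumptions for illustration (the abstract does not name the indicator), and the CNN training itself is omitted:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def capacity(H, snr):
    """MIMO capacity log2 det(I + (snr/Nt) H H^H) for channel matrix H."""
    nr, nt = H.shape
    G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return np.log2(np.linalg.det(G).real)

def best_subset_label(H, n_select, snr=10.0):
    """Label = index of the transmit-antenna subset maximizing capacity.

    This mirrors the label-generation step in the abstract; a CNN would
    then learn to predict this class label from H directly.
    """
    nt = H.shape[1]
    subsets = list(combinations(range(nt), n_select))
    caps = [capacity(H[:, list(s)], snr) for s in subsets]
    return int(np.argmax(caps)), subsets

# Toy 4x8 Rayleigh channel, select 2 of 8 transmit antennas.
H = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(2)
label, subsets = best_subset_label(H, n_select=2)
print(f"class label {label} -> antennas {subsets[label]}")
```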

  10. Idealized modeling of convective organization with changing sea surface temperatures using multiple equilibria in weak temperature gradient simulations

    NASA Astrophysics Data System (ADS)

    Sentić, Stipo; Sessions, Sharon L.

    2017-06-01

    The weak temperature gradient (WTG) approximation is a method of parameterizing the influences of the large scale on local convection in limited domain simulations. WTG simulations exhibit multiple equilibria in precipitation; depending on the initial moisture content, simulations can precipitate or remain dry for otherwise identical boundary conditions. We use a hypothesized analogy between multiple equilibria in precipitation in WTG simulations, and dry and moist regions of organized convection to study tropical convective organization. We find that the range of wind speeds that support multiple equilibria depends on sea surface temperature (SST). Compared to the present SST, low SSTs support a narrower range of multiple equilibria at higher wind speeds. In contrast, high SSTs exhibit a narrower range of multiple equilibria at low wind speeds. This suggests that at high SSTs, organized convection might occur with lower surface forcing. To characterize convection at different SSTs, we analyze the change in relationships between precipitation rate, atmospheric stability, moisture content, and the large-scale transport of moist entropy and moisture with increasing SSTs. We find an increase in large-scale export of moisture and moist entropy from dry simulations with increasing SST, which is consistent with a strengthening of the up-gradient transport of moisture from dry regions to moist regions in organized convection. Furthermore, the changes in diagnostic relationships with SST are consistent with more intense convection in precipitating regions of organized convection for higher SSTs.

  11. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  12. Seeing the forest for the trees: hybridity and social-ecological symbols, rituals and resilience in postdisaster contexts

    Treesearch

    Keith G. Tidball

    2014-01-01

    The role of community-based natural resources management in the form of "greening" after large scale system shocks and surprises is argued to provide multiple benefits via engagement with living elements of social-ecological systems and subsequent enhanced resilience at multiple scales. The importance of so-called social-ecological symbols, especially the...

  13. Stability of large-scale systems.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1972-01-01

    The purpose of this paper is to present the results obtained in stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.

  14. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults.

    PubMed

    Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey

    2015-10-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.

  15. Multi-scale occupancy estimation and modelling using multiple detection methods

    USGS Publications Warehouse

    Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.

    2008-01-01

    Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods.
The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
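
The two-scale structure with method-specific detection probabilities can be sketched as a single-site likelihood. This is a deliberately minimal reading of the model (one station, survey-level small-scale presence, no covariates), with invented parameter values:

```python
import numpy as np

def site_likelihood(y, psi, theta, p):
    """Likelihood of one site's detection history under a two-scale
    occupancy model with method-specific detection probabilities.

    y     : (surveys, methods) array of 0/1 detections
    psi   : prob. the species uses the sample unit (large scale)
    theta : prob. it is present at the station during a survey (small scale)
    p     : per-method detection probabilities, length = methods

    A minimal sketch of the model structure; the real analysis fits
    many sites and covariates by maximum likelihood.
    """
    y = np.asarray(y)
    p = np.asarray(p)
    per_survey = []
    for row in y:
        detect_prob = np.prod(p ** row * (1 - p) ** (1 - row))
        absent_term = 0.0 if row.any() else 1.0
        per_survey.append(theta * detect_prob + (1 - theta) * absent_term)
    all_zero = 0.0 if y.any() else 1.0
    return psi * np.prod(per_survey) + (1 - psi) * all_zero

# Two surveys, two methods; detected once, by method 1 only.
history = [[1, 0],
           [0, 0]]
L = site_likelihood(history, psi=0.7, theta=0.6, p=[0.5, 0.3])
print(f"likelihood = {L:.4f}")
```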

  16. High Agreement was Obtained Across Scores from Multiple Equated Scales for Social Anxiety Disorder using Item Response Theory.

    PubMed

    Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim

    2018-04-10

There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem via equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings.
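
Score conversion between equated scales can be illustrated with 2PL true-score equating: invert one scale's test characteristic curve to a latent trait value, then evaluate the other scale's curve there. The item parameters below are invented, and the fixed-parameter calibration that anchors both scales to a common metric is assumed to have already happened:

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + np.exp(-np.asarray(a) * (theta - np.asarray(b))))

def expected_score(theta, a, b):
    """Test characteristic curve: expected summed score at ability theta."""
    return p2pl(theta, a, b).sum()

def equate_score(score_A, a_A, b_A, a_B, b_B):
    """Map an observed score on scale A to the equivalent score on scale B
    by inverting scale A's test characteristic curve (bisection on theta,
    which is valid because the curve is monotone in theta)."""
    lo, hi = -6.0, 6.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if expected_score(mid, a_A, b_A) < score_A:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return expected_score(theta, a_B, b_B)

# Two invented 4-item scales assumed to measure the same trait.
a_A, b_A = [1.2, 0.9, 1.5, 1.0], [-1.0, 0.0, 0.5, 1.5]
a_B, b_B = [1.1, 1.3, 0.8, 1.4], [-0.5, 0.2, 1.0, 2.0]
print(f"score 3.0 on scale A ~ {equate_score(3.0, a_A, b_A, a_B, b_B):.2f} on scale B")
```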

  17. Circulation and multiple-scale variability in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Dong, Changming; Idica, Eileen Y.; McWilliams, James C.

    2009-09-01

    The oceanic circulation in the Southern California Bight (SCB) is influenced by the large-scale California Current offshore, tropical remote forcing through the coastal wave guide alongshore, and local atmospheric forcing. The region is characterized by local complexity in the topography and coastline. All these factors engender variability in the circulation on interannual, seasonal, and intraseasonal time scales. This study applies the Regional Oceanic Modeling System (ROMS) to the SCB circulation and its multiple-scale variability. The model is configured in three levels of nested grids with the parent grid covering the whole US West Coast. The first child grid covers a large southern domain, and the third grid zooms in on the SCB region. The three horizontal grid resolutions are 20 km, 6.7 km, and 1 km, respectively. The external forcings are momentum, heat, and freshwater flux at the surface and adaptive nudging to gyre-scale SODA reanalysis fields at the boundaries. The momentum flux is from a three-hourly reanalysis mesoscale MM5 wind with a 6 km resolution for the finest grid in the SCB. The oceanic model starts in an equilibrium state from a multiple-year cyclical climatology run, and then it is integrated from years 1996 through 2003. In this paper, the 8-year simulation at the 1 km resolution is analyzed and assessed against extensive observational data: High-Frequency (HF) radar data, current meters, Acoustic Doppler Current Profilers (ADCP) data, hydrographic measurements, tide gauges, drifters, altimeters, and radiometers. The simulation shows that the domain-scale surface circulation in the SCB is characterized by the Southern California Cyclonic Gyre, comprised of the offshore equatorward California Current System and the onshore poleward Southern California Countercurrent. 
The simulation also exhibits three subdomain-scale, persistent (i.e., standing), cyclonic eddies related to the local topography and wind forcing: the Santa Barbara Channel Eddy, the Central-SCB Eddy, and the Catalina-Clemente Eddy. Comparisons with observational data reveal that ROMS reproduces a realistic mean state of the SCB oceanic circulation, as well as its interannual (mainly as a local manifestation of an ENSO event), seasonal, and intraseasonal (eddy-scale) variations. We find high correlations of the wind curl with both the alongshore pressure gradient (APG) and the eddy kinetic energy level in their variations on time scales of seasons and longer. The geostrophic currents are much stronger than the wind-driven Ekman flows at the surface. The model exhibits intrinsic eddy variability with strong topographically related heterogeneity, westward-propagating Rossby waves, and poleward-propagating coastally-trapped waves (albeit with smaller amplitude than observed due to missing high-frequency variations in the southern boundary conditions).

  18. Blazing Signature Filter: a library for fast pairwise similarity comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan

Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate its ability to scale to billions of pairwise comparisons and the usefulness of this approach.
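The core idea described above — binarize each row of a dataset, then use bitwise AND plus a popcount as a cheap coarse similarity to screen out unproductive pairs before any expensive comparison — can be sketched as follows. This is a minimal illustration under assumed defaults (the threshold and cutoff are arbitrary placeholders), not the BSF implementation itself:

```python
import numpy as np

def binarize(matrix, threshold=0.0):
    """Collapse a real-valued data matrix into one packed binary
    signature per row (bit set where the value exceeds the threshold)."""
    bits = (np.asarray(matrix) > threshold).astype(np.uint8)
    return np.packbits(bits, axis=1)

def shared_bits(sig_a, sig_b):
    """Coarse similarity: number of features set in both signatures,
    computed with a bitwise AND and a popcount."""
    return int(np.unpackbits(sig_a & sig_b).sum())

def filter_pairs(signatures, cutoff):
    """Keep only row pairs whose coarse similarity reaches the cutoff;
    only these survivors would go on to the expensive similarity step."""
    n = len(signatures)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if shared_bits(signatures[i], signatures[j]) >= cutoff]
```

Because the filter works on packed bytes, each comparison touches d/8 bytes instead of d floats, which is what lets this style of pre-filter scale to very large numbers of pairs.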

  19. Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions

    DOE PAGES

    McLerran, Larry; Tribedy, Prithwish

    2015-11-02

High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to the rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.

  20. Geospatial optimization of siting large-scale solar projects

    USGS Publications Warehouse

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.

    2014-01-01

    guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  1. A Multi-Cultural Comparison of the Factor Structure of the MIDAS for Adults/College Students.

    ERIC Educational Resources Information Center

    Jones, James A.

    The Multiple Intelligences Developmental Assessment Scales (MIDAS) instrument was developed to measure eight constructs of intelligence. The 119-item MIDAS provides scores for 26 subscales in addition to the 8 major scales. Using the 26 subscales, a factor structure was developed on half of a U.S. sample of college students (n=834), while the…

  2. A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties

    ERIC Educational Resources Information Center

    Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.

    2016-01-01

    The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…

  3. Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.

    PubMed

    Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner

    2016-01-01

Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Physical activity correlates with neurological impairment and disability in multiple sclerosis.

    PubMed

    Motl, Robert W; Snook, Erin M; Wynn, Daniel R; Vollmer, Timothy

    2008-06-01

    This study examined the correlation of physical activity with neurological impairment and disability in persons with multiple sclerosis (MS). Eighty individuals with MS wore an accelerometer for 7 days and completed the Symptom Inventory (SI), Performance Scales (PS), and Expanded Disability Status Scale. There were large negative correlations between the accelerometer and SI (r = -0.56; rho = -0.58) and Expanded Disability Status Scale (r = -0.60; rho = -0.69) and a moderate negative correlation between the accelerometer and PS (r = -0.39; rho = -0.48) indicating that physical activity was associated with reduced neurological impairment and disability. Such findings provide a preliminary basis for using an accelerometer and the SI and PS as outcome measures in large-scale prospective and experimental examinations of the effect of physical activity behavior on disability and dependence in MS.
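The Pearson (r) and Spearman (rho) coefficients reported above can be computed as sketched below. The data here are synthetic, illustrative values (not the study's measurements), and this Spearman version skips tie correction for brevity:

```python
import numpy as np

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson on rank-transformed data.
    No tie correction -- fine for illustration, not for real tied data."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

# Hypothetical example: higher weekly activity counts paired with
# lower disability scores (values invented for the sketch).
activity = np.array([120.0, 340.0, 210.0, 400.0, 90.0, 260.0])
disability = np.array([6.0, 2.5, 4.0, 1.5, 6.5, 3.0])
```

A negative r and rho on such data would mirror the direction of the associations the study reports.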

  5. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
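The scale-drift correction from known camera height reduces to a simple ratio between the known mounting height and the height recovered from the estimated ground plane. A hedged sketch (the function name and the 1.7 m rig height are illustrative assumptions, not values from the paper):

```python
def height_scale_correction(translation, est_ground_height, known_height=1.7):
    """Rescale a monocular-SFM frame-to-frame translation so that the
    estimated camera height above the fitted ground plane matches the
    known mounting height (1.7 m is an assumed rig value)."""
    s = known_height / est_ground_height
    return [s * t for t in translation]
```

In a full pipeline this scale would be propagated to the map points as well, and the estimated ground height itself would come from the cue-combination step the abstract describes.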

  6. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  7. Model simulations and proxy-based reconstructions for the European region in the past millennium (Invited)

    NASA Astrophysics Data System (ADS)

    Zorita, E.

    2009-12-01

One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models to simulate climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. 
These examples indicate that the centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.

  8. Unsteady loads due to propulsive lift configurations. Part A: Investigation of scaling laws

    NASA Technical Reports Server (NTRS)

    Morton, J. B.; Haviland, J. K.

    1978-01-01

This study covered scaling laws and pressure measurements made to determine details of the large-scale jet structure and to verify scaling laws by direct comparison. The basis of comparison was a test facility at NASA Langley in which a JT-15D exhausted over a boilerplate airfoil surface to reproduce upper surface blowing conditions. A quarter-scale model of this facility was built, using cold jets. A comparison between full-scale and model pressure coefficient spectra, presented as functions of Strouhal numbers, showed fair agreement; however, a shift of spectral peaks was noted. This was not believed to be due to Mach number or Reynolds number effects, but did appear to be traceable to discrepancies in jet temperatures. A correction for jet temperature was then tried, similar to one used for far-field noise prediction. This was found to correct the spectral peak discrepancy.

  9. Global map of physical interactions among differentially expressed genes in multiple sclerosis relapses and remissions.

    PubMed

    Tuller, Tamir; Atar, Shimshi; Ruppin, Eytan; Gurevich, Michael; Achiron, Anat

    2011-09-15

Multiple sclerosis (MS) is a central nervous system autoimmune inflammatory T-cell-mediated disease with a relapsing-remitting course in the majority of patients. In this study, we performed a high-resolution systems biology analysis of gene expression and physical interactions in MS relapse and remission. To this end, we integrated 164 large-scale measurements of gene expression in peripheral blood mononuclear cells of MS patients in relapse or remission and healthy subjects, with large-scale information about the physical interactions between these genes obtained from public databases. These data were analyzed with a variety of computational methods. We find that there is a clear and significant global network-level signal that is related to the changes in gene expression of MS patients in comparison to healthy subjects. However, despite the clear differences in the clinical symptoms of MS patients in relapse versus remission, the network-level signal is weaker when comparing patients in these two stages of the disease. This result suggests that most of the genes have relatively similar expression levels in the two stages of the disease. In accordance with previous studies, we found that the pathways related to regulation of cell death, chemotaxis and inflammatory response are differentially expressed in the disease in comparison to healthy subjects, while pathways related to cell adhesion, cell migration and cell-cell signaling are activated in relapse in comparison to remission. However, the current study includes a detailed report of the exact set of genes involved in these pathways and the interactions between them. For example, we found that the genes TP53 and IL1 are 'network hubs' that interact with many of the differentially expressed genes in MS patients versus healthy subjects, and the epidermal growth factor receptor is a 'network hub' in the case of MS patients with relapse versus remission. 
The statistical approaches employed in this study enabled us to report new sets of genes that according to their gene expression and physical interactions are predicted to be differentially expressed in MS versus healthy subjects, and in MS patients in relapse versus remission. Some of these genes may be useful biomarkers for diagnosing MS and predicting relapses in MS patients.

  10. Calving distributions of individual bulls in multiple-sire pastures

    USDA-ARS's Scientific Manuscript database

    The objective of this project was to quantify patterns in the calving rate of sires in multiple-sire pastures over seven years at a large-scale cow-calf operation. Data consisted of reproductive and genomic records from multiple-sire breeding pastures (n=33) at the United States Meat Animal Research...

  11. GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition

    PubMed Central

    2011-01-01

    Background Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion-based method, GPU-Q-J, that is stable with single precision calculations and suitable for graphics processor units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux, where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings The Nutritious Rice for the World Project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used. Conclusions GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome- and genome-wide scale. PMID:21453553
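RMSD after optimal superposition can be computed with the SVD-based Kabsch algorithm; the NumPy sketch below shows that underlying computation. GPU-Q-J itself uses a quaternion formulation with GPU kernels, which are not reproduced here — this is only the reference calculation such a tool accelerates:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two N x 3 coordinate sets after optimal superposition.

    Uses the SVD-based Kabsch algorithm: remove translations, find the
    rotation minimizing the squared deviation, then measure the residual."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                             # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```

For a structure and a rigidly rotated-plus-translated copy of itself, this returns an RMSD of (numerically) zero, which is a convenient sanity check for any superposition code.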

  12. A Comparison of Linking Methods for Estimating National Trends in International Comparative Large-Scale Assessments in the Presence of Cross-national DIF

    ERIC Educational Resources Information Center

    Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole

    2016-01-01

    Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…

  13. Large-Scale Dynamics of the Magnetospheric Boundary: Comparisons between Global MHD Simulation Results and ISTP Observations

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.

    1998-01-01

    Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R(sub E) from the Earth.

  14. Evaluation of an index of biotic integrity approach used to assess biological condition in western U.S. streams and rivers at varying spatial scales

    USGS Publications Warehouse

    Meador, M.R.; Whittier, T.R.; Goldstein, R.M.; Hughes, R.M.; Peck, D.V.

    2008-01-01

    Consistent assessments of biological condition are needed across multiple ecoregions to provide a greater understanding of the spatial extent of environmental degradation. However, consistent assessments at large geographic scales are often hampered by lack of uniformity in data collection, analyses, and interpretation. The index of biotic integrity (IBI) has been widely used in eastern and central North America, where fish assemblages are complex and largely composed of native species, but IBI development has been hindered in the western United States because of relatively low fish species richness and greater relative abundance of alien fishes. Approaches to developing IBIs rarely provide a consistent means of assessing biological condition across multiple ecoregions. We conducted an evaluation of IBIs recently proposed for three ecoregions of the western United States using an independent data set covering a large geographic scale. We standardized the regional IBIs and developed biological condition criteria, assessed the responsiveness of IBIs to basin-level land uses, and assessed their precision and concordance with basin-scale IBIs. Standardized IBI scores from 318 sites in the western United States comprising mountain, plains, and xeric ecoregions were significantly related to combined urban and agricultural land uses. Standard deviations and coefficients of variation revealed relatively low variation in IBI scores based on multiple sampling reaches at sites. A relatively high degree of corroboration with independent, locally developed IBIs indicates that the regional IBIs are robust across large geographic scales, providing precise and accurate assessments of biological condition for western U.S. streams. © Copyright by the American Fisheries Society 2008.

  15. Weakly nonparallel and curvature effects on stationary crossflow instability: Comparison of results from multiple-scales analysis and parabolized stability equations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Choudhari, Meelan; Li, Fei

    1995-01-01

    A multiple-scales approach is used to approximate the effects of nonparallelism and streamwise surface curvature on the growth of stationary crossflow vortices in incompressible, three-dimensional boundary layers. The results agree with results predicted by solving the parabolized stability equations in regions where the nonparallelism is sufficiently weak. As the nonparallelism increases, the agreement between the two approaches worsens. An attempt has been made to quantify the effect of nonparallelism on flow stability in terms of a nondimensional number that describes the rate of change of the mean flow relative to the disturbance wavelength. We find that the above nondimensional number provides useful information about the adequacy of the multiple-scales approximation for different disturbances for a given flow geometry, but the number does not collapse data for different flow geometries onto a single curve.

  16. Progressive structure-based alignment of homologous proteins: Adopting sequence comparison strategies.

    PubMed

    Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; de Brevern, Alexandre G

    2012-09-01

    Comparison of multiple protein structures has a broad range of applications in the analysis of protein structure, function and evolution. Multiple structure alignment tools (MSTAs) are necessary to obtain a simultaneous comparison of a family of related folds. In this study, we have developed a method for multiple structure comparison largely based on sequence alignment techniques. A widely used Structural Alphabet named Protein Blocks (PBs) was used to transform the information on 3D protein backbone conformation into a 1D sequence string. A progressive alignment strategy similar to CLUSTALW was adopted for multiple PB sequence alignment (mulPBA). Highly similar stretches identified by the pairwise alignments are given higher weights during the alignment. The residue equivalences from PB-based alignments are used to obtain a three-dimensional fit of the structures, followed by an iterative refinement of the structural superposition. Systematic comparisons using benchmark datasets of MSTAs underline that the alignment quality is better than MULTIPROT, MUSTANG and the alignments in HOMSTRAD in more than 85% of the cases. Comparison with other rigid-body and flexible MSTAs also indicates that mulPBA alignments are superior to most of the rigid-body MSTAs and highly comparable to the flexible alignment methods. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
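The sequence-alignment machinery that mulPBA borrows is standard dynamic programming; a minimal Needleman-Wunsch global alignment scorer over short strings is sketched below. The scoring values here are arbitrary placeholders, not the Protein Blocks substitution matrix mulPBA actually uses:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment score of strings a and b by dynamic programming.

    score[i][j] holds the best score for aligning a[:i] with b[:j];
    each cell extends a diagonal (match/mismatch) or gapped alignment."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # a[:i] aligned against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # b[:j] aligned against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # (mis)match
                              score[i - 1][j] + gap,     # gap in b
                              score[i][j - 1] + gap)     # gap in a
    return score[n][m]
```

Once 3D backbones are encoded as 1D structural-alphabet strings, exactly this kind of scorer (with an appropriate substitution matrix) drives the pairwise and progressive alignment steps.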

  17. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing output has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  18. Productive potential of cassava plants (Manihot esculenta Crantz) propagated by leaf buds.

    PubMed

    Neves, Reizaluamar J; Diniz, Rafael P; Oliveira, Eder J DE

    2018-04-23

    New techniques for rapid multiplication of cassava (Manihot esculenta Crantz) have been developed, requiring technical support for large-scale use. The main objective of this work was to evaluate the agronomic performance of plantlets obtained by the leaf-bud technique against stem cuttings under field conditions. The work was conducted using a randomized block design in a factorial scheme with 3 varieties (BRS Kiriris, 98150-06, 9624-09) × 4 origins of the plantlets (conventional stem cuttings of 20 cm length, and leaf buds of the upper, middle and inferior stem parts) × 2 agrochemical treatments (control and treated). There was a remarkable decrease in some agronomic traits, ranging from 23% (number of branches) to 62% (shoot weight), when using leaf-bud plantlets. The treatment of plantlets with agrochemicals promoted significant increases in all traits, ranging from 26% (number of roots per plant) to 46% (shoot weight). The plantlets originating from leaf buds of the upper and middle parts were able to generate stem-like plants similar to stem-derived ones. Despite its lower agronomic performance under field conditions, multiplication by leaf buds may generate five times the number of propagules in comparison with conventional multiplication, and therefore it could be a viable alternative for rapid cassava multiplication.

  19. Pollutant Transport and Fate: Relations Between Flow-paths and Downstream Impacts of Human Activities

    NASA Astrophysics Data System (ADS)

    Thorslund, J.; Jarsjo, J.; Destouni, G.

    2017-12-01

    The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater resources of good quality, it is critical to understand how pollutants are released into, transported and transformed within the hydrological system. Some key scientific questions include: What are the net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions within and between components of the landscape, such as wetlands, in mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of the net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions through wetlands as a potential ecosystem service of pollutant mitigation. Results show that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at a larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions to pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales. 
We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively small scale at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.

  20. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  1. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.
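The combine-and-evaluate idea behind MCPSC can be sketched as follows. This is a minimal illustration with an assumed equal-weight averaging rule and a pairwise F-measure; the framework's actual combination scheme and the rckskel skeletons library are not reproduced here.

```python
# Hypothetical sketch of multi-criteria protein structure comparison (MCPSC):
# combine per-method similarity scores (e.g. from TM-align, CE, USM), then
# evaluate how well the induced grouping matches a reference with a pairwise
# F-measure. The equal-weight average is an assumption, not the paper's rule.

def combine_scores(scores, weights=None):
    """Weighted average of per-method similarity scores, each in [0, 1]."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

def f_measure(true_pairs, predicted_pairs):
    """Pairwise F-measure: harmonic mean of precision and recall over
    same-group domain pairs."""
    tp = len(true_pairs & predicted_pairs)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted_pairs)
    recall = tp / len(true_pairs)
    return 2 * precision * recall / (precision + recall)

# Toy example: one domain pair scored by three methods, and a comparison
# of predicted versus reference same-group pairs.
combined = combine_scores([0.85, 0.70, 0.91])
true = {("a", "b"), ("a", "c"), ("b", "c")}
pred = {("a", "b"), ("b", "c"), ("c", "d")}
quality = f_measure(true, pred)
```

A benchmark such as CK34 would supply the reference grouping; the F-measure then summarizes agreement in a single number, as in the 0.91 reported above.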

  2. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    PubMed

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. 
We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales.

  3. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions

    PubMed Central

    Hahn, Beth A.; Dreitz, Victoria J.; George, T. Luke

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer’s sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer’s sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. 
We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales. PMID:29065128

  4. The Saskatchewan River Basin - a large scale observatory for water security research (Invited)

    NASA Astrophysics Data System (ADS)

    Wheater, H. S.

    2013-12-01

    The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. 
And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and multiple jurisdictions. The SaskRB has therefore been developed as a large-scale observatory, now a Regional Hydroclimate Project of the World Climate Research Programme's GEWEX project, and is available to contribute to the emerging North American Water Program. State-of-the-art hydro-ecological experimental sites have been developed for the key biomes, along with a river and lake biogeochemical research facility focussed on impacts of nutrients and exotic chemicals. Data are integrated at SaskRB scale to support the development of improved large-scale climate and hydrological modelling products, the development of decision support systems for local, provincial and basin-scale management, and the development of related social science research, engaging stakeholders in the research and exploring their values and priorities for water security. The observatory provides the multiple scales of observation and modelling required to develop: a) new climate, hydrological and ecological science and modelling tools to address environmental change in key environments, and their integrated effects and feedbacks at large catchment scale, b) new tools needed to support river basin management under uncertainty, including anthropogenic controls on land and water management, and c) the place-based focus for the development of new transdisciplinary science.

  5. Toward single-chirality carbon nanotube device arrays.

    PubMed

    Vijayaraghavan, Aravind; Hennrich, Frank; Stürzl, Ninette; Engel, Michael; Ganzhorn, Marc; Oron-Carl, Matti; Marquardt, Christoph W; Dehm, Simone; Lebedkin, Sergei; Kappes, Manfred M; Krupke, Ralph

    2010-05-25

    The large-scale integration of devices consisting of individual single-walled carbon nanotubes (SWCNT), all of the same chirality, is a critical step toward their electronic, optoelectronic, and electromechanical application. Here, the authors realize two related goals, the first of which is the fabrication of high-density, single-chirality SWCNT device arrays by dielectrophoretic assembly from monodisperse SWCNT solution obtained by polymer-mediated sorting. Such arrays are ideal for correlating measurements using various techniques across multiple identical devices, which is the second goal. The arrays are characterized by voltage-contrast scanning electron microscopy, electron transport, photoluminescence (PL), and Raman spectroscopy and show identical signatures as expected for single-chirality SWCNTs. In the assembled nanotubes, a large D peak in Raman spectra, a large dark-exciton peak in PL spectra as well as lowered conductance and slow switching in electron transport are all shown to be correlated to each other. By comparison to control samples, we conclude that these are the result of scattering from electronic and not structural defects resulting from the polymer wrapping, similar to what has been predicted for DNA wrapping.

  6. Investigating measurement equivalence of visual analogue scales and Likert-type scales in Internet-based personality questionnaires.

    PubMed

    Kuhlmann, Tim; Dantlgraber, Michael; Reips, Ulf-Dietrich

    2017-12-01

Visual analogue scales (VASs) have shown superior measurement qualities in comparison to traditional Likert-type response scales in previous studies. The present study expands the comparison of response scales to properties of Internet-based personality scales in a within-subjects design. A sample of 879 participants filled out an online questionnaire measuring Conscientiousness, Excitement Seeking, and Narcissism. The questionnaire contained all instruments in both answer scale versions in a counterbalanced design. Results show comparable reliabilities, means, and SDs for the VAS versions of the original scales, in comparison to Likert-type scales. To assess the validity of the measurements, age and gender were used as criteria, because all three constructs have shown non-zero correlations with age and gender in previous research. Both response scales showed a high overlap and the proposed relationships with age and gender. The associations were largely identical, with the exception of an increase in explained variance when predicting age from the VAS version of Excitement Seeking (B10 = 1318.95, ΔR² = .025). VASs showed similar properties to Likert-type response scales in most cases.

  7. CROSS-SCALE CORRELATIONS AND THE DESIGN AND ANALYSIS OF AVIAN HABITAT SELECTION STUDIES

    EPA Science Inventory

    It has long been suggested that birds select habitat hierarchically, progressing from coarser to finer spatial scales. This hypothesis, in conjunction with the realization that many organisms likely respond to environmental patterns at multiple spatial scales, has led to a large ...

  8. Multiple time scale analysis of pressure oscillations in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution, and the supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of analytical MTS results with numerical simulations demonstrates excellent agreement.
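The two-time-scale setup described in this abstract can be sketched in the generic MTS form below; this is the textbook expansion, assumed rather than taken from the paper itself.

```latex
% Fast and slow time scales, with a small parameter \varepsilon
T_0 = t, \qquad T_1 = \varepsilon t, \qquad \varepsilon \ll 1
% Perturbation expansion of the acoustic pressure
p(t;\varepsilon) = p_0(T_0, T_1) + \varepsilon\, p_1(T_0, T_1) + \mathcal{O}(\varepsilon^2)
% The time derivative splits over the two scales
\frac{d}{dt} = \frac{\partial}{\partial T_0} + \varepsilon\, \frac{\partial}{\partial T_1}
```

The oscillation is carried by the dependence on $T_0$, while evolution equations for the amplitude and phase on $T_1$ arise from the solvability conditions at order $\varepsilon$.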

  9. Model-independent and model-based local lensing properties of CL0024+1654 from multiply imaged galaxies

    NASA Astrophysics Data System (ADS)

    Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas

    2018-04-01

    Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. 
We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
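For reference, the standard local lensing definitions behind the quantities $J$, $f$, and $g$ are given below. The conventions, in particular how the ratios are normalized between images, are assumed here and may differ slightly from the paper's.

```latex
% Magnification from convergence \kappa and shear \gamma = (\gamma_1, \gamma_2)
\mu = \frac{1}{(1-\kappa)^2 - \gamma_1^2 - \gamma_2^2}
% Reduced shear
g = \frac{\gamma}{1-\kappa}
% Ratios accessible model-independently between multiple images i and j
J = \frac{\mu_i}{\mu_j}, \qquad f = \frac{1-\kappa_i}{1-\kappa_j}
```

Because only relative quantities between the resolved images enter, these ratios can be constrained directly from the linear mapping between image shapes, without assuming a mass model.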

  10. Reaction schemes visualized in network form: the syntheses of strychnine as an example.

    PubMed

    Proudfoot, John R

    2013-05-24

Representation of synthesis sequences in a network form provides an effective method for the comparison of multiple reaction schemes and an opportunity to emphasize features, such as reaction scale, that are often relegated to experimental sections. An example of data formatting that allows construction of network maps in Cytoscape is presented, along with maps that illustrate the comparison of multiple reaction sequences, comparison of scaffold changes within sequences, and consolidation to highlight common key intermediates used across sequences. The 17 different synthetic routes reported for strychnine are used as an example basis set. The reaction maps presented required significant data extraction and curation; a standardized tabular format for reporting reaction information, applied consistently, could allow the automated combination of reaction information across different sources.

  11. Feature generation and representations for protein-protein interaction classification.

    PubMed

    Lan, Man; Tan, Chew Lim; Su, Jian

    2009-10-01

Automatically detecting protein-protein interaction (PPI)-relevant articles is a crucial step in large-scale biological database curation. Previous work adopted POS tagging, shallow parsing and sentence splitting techniques, but these achieved worse performance than the simple bag-of-words representation. In this paper, we generated and investigated multiple types of feature representations in order to further improve the performance of the PPI text classification task. Besides the traditional domain-independent bag-of-words approach and term weighting methods, we also explored domain-dependent features, i.e. protein-protein interaction trigger keywords, protein named entities, and advanced ways of incorporating Natural Language Processing (NLP) output. The integration of these multiple features was evaluated on the BioCreAtIvE II corpus. The experimental results showed that both the advanced use of NLP output and the integration of bag-of-words with NLP output improved text classification performance. Specifically, in comparison with the best performance achieved in the BioCreAtIvE II IAS, feature-level and classifier-level integration of multiple features improved classification performance by 2.71% and 3.95%, respectively.
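Feature-level integration of the kind described here amounts to concatenating representations before classification. The sketch below is illustrative only: the trigger-keyword list, vocabulary, and function names are assumptions, not the paper's pipeline.

```python
# Illustrative sketch (assumed names, not the paper's pipeline): feature-level
# integration of a bag-of-words vector with domain-dependent features, i.e.
# PPI trigger-keyword counts and protein named-entity counts, by concatenation.
TRIGGERS = {"interact", "bind", "phosphorylate", "associate"}  # assumed list

def bag_of_words(tokens, vocabulary):
    """Term-frequency vector over a fixed vocabulary."""
    return [tokens.count(term) for term in vocabulary]

def domain_features(tokens, protein_names):
    """Counts of PPI trigger keywords and protein name mentions."""
    trigger_count = sum(1 for t in tokens if t in TRIGGERS)
    protein_count = sum(1 for t in tokens if t in protein_names)
    return [trigger_count, protein_count]

def integrate(tokens, vocabulary, protein_names):
    """Feature-level integration: concatenate the two representations."""
    return bag_of_words(tokens, vocabulary) + domain_features(tokens, protein_names)

tokens = "rad51 and brca2 interact in vivo".split()
vec = integrate(tokens, ["interact", "in", "vivo"], {"rad51", "brca2"})
```

Classifier-level integration would instead train separate classifiers on each representation and combine their outputs, which is the other strategy the abstract compares.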

  12. Comparing multi-module connections in membrane chromatography scale-up.

    PubMed

    Yu, Zhou; Karkaria, Tishtar; Espina, Marianela; Hunjun, Manjeet; Surendran, Abera; Luu, Tina; Telychko, Julia; Yang, Yan-Ping

    2015-07-20

    Membrane chromatography is increasingly used for protein purification in the biopharmaceutical industry. Membrane adsorbers are often pre-assembled by manufacturers as ready-to-use modules. In large-scale protein manufacturing settings, the use of multiple membrane modules for a single batch is often required due to the large quantity of feed material. The question as to how multiple modules can be connected to achieve optimum separation and productivity has been previously approached using model proteins and mass transport theories. In this study, we compare the performance of multiple membrane modules in series and in parallel in the production of a protein antigen. Series connection was shown to provide superior separation compared to parallel connection in the context of competitive adsorption. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Existence of k⁻¹ power-law scaling in the equilibrium regions of wall-bounded turbulence explained by Heisenberg's eddy viscosity.

    PubMed

    Katul, Gabriel G; Porporato, Amilcare; Nikora, Vladimir

    2012-12-01

    The existence of a "-1" power-law scaling at low wavenumbers in the longitudinal velocity spectrum of wall-bounded turbulence was explained by multiple mechanisms; however, experimental support has not been uniform across laboratory studies. This letter shows that Heisenberg's eddy viscosity approach can provide a theoretical framework that bridges these multiple mechanisms and explains the elusiveness of the "-1" power law in some experiments. Novel theoretical outcomes are conjectured about the role of intermittency and very-large scale motions in modifying the k⁻¹ scaling.
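Heisenberg's eddy viscosity, referenced above, has the standard form below (the textbook expression, not the letter's full derivation); balancing production against this scale-dependent viscosity in the equilibrium region is what yields the "-1" spectrum.

```latex
% Heisenberg's eddy viscosity: drag of eddies smaller than 1/k on larger scales
\nu_t(k) = C_H \int_k^{\infty} \sqrt{\frac{E(k')}{k'^{3}}}\, \mathrm{d}k'
% Spectrum consistent with the equilibrium (constant-stress) region
E(k) \sim u_*^{2}\, k^{-1}
```

Here $E(k)$ is the longitudinal velocity spectrum, $u_*$ the friction velocity, and $C_H$ an order-one constant; departures from this balance are one way the $k^{-1}$ range can be obscured in experiments.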

  14. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  15. From a meso- to micro-scale connectome: array tomography and mGRASP

    PubMed Central

    Rah, Jong-Cheol; Feng, Linqing; Druckmann, Shaul; Lee, Hojin; Kim, Jinhyun

    2015-01-01

Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single-synapse resolution and large-scale capacity, especially at the multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde viruses, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools to enable mapping of brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors. PMID:26089781

  16. Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data

    EPA Science Inventory

    The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates both small scale individual biofiltration device monitoring, along with large scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...

  17. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  18. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  19. Functional Genomic Landscape of Human Breast Cancer Drivers, Vulnerabilities, and Resistance.

    PubMed

    Marcotte, Richard; Sayad, Azin; Brown, Kevin R; Sanchez-Garcia, Felix; Reimand, Jüri; Haider, Maliha; Virtanen, Carl; Bradner, James E; Bader, Gary D; Mills, Gordon B; Pe'er, Dana; Moffat, Jason; Neel, Benjamin G

    2016-01-14

    Large-scale genomic studies have identified multiple somatic aberrations in breast cancer, including copy number alterations and point mutations. Still, identifying causal variants and emergent vulnerabilities that arise as a consequence of genetic alterations remain major challenges. We performed whole-genome small hairpin RNA (shRNA) "dropout screens" on 77 breast cancer cell lines. Using a hierarchical linear regression algorithm to score our screen results and integrate them with accompanying detailed genetic and proteomic information, we identify vulnerabilities in breast cancer, including candidate "drivers," and reveal general functional genomic properties of cancer cells. Comparisons of gene essentiality with drug sensitivity data suggest potential resistance mechanisms, effects of existing anti-cancer drugs, and opportunities for combination therapy. Finally, we demonstrate the utility of this large dataset by identifying BRD4 as a potential target in luminal breast cancer and PIK3CA mutations as a resistance determinant for BET-inhibitors. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
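The evaluation described here rests on a proper scoring rule that is sensitive near probability 1 and can be weighted by outbreak size. As a simplified stand-in (the paper's exact metric is not reproduced), the logarithmic score below has both properties.

```python
# Simplified stand-in for the paper's evaluation metric: the logarithmic
# score is a proper scoring rule sensitive to variation in probabilities
# near 1, and a size-weighted mean reflects larger outbreaks more strongly.
import math

def log_score(p, outcome, eps=1e-12):
    """Log of the probability assigned to what actually happened
    (outcome 1 = outbreak week, 0 = baseline week). Closer to 0 is better."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return math.log(p if outcome else 1 - p)

def weighted_mean_score(probs, outcomes, sizes):
    """Mean log score, weighted by outbreak size (baseline weeks can
    carry weight 1)."""
    scores = [log_score(p, y) for p, y in zip(probs, outcomes)]
    return sum(w * s for w, s in zip(sizes, scores)) / sum(sizes)

# Toy example: one flagged outbreak week and one baseline week.
overall = weighted_mean_score([0.9, 0.1], [1, 0], [2.0, 1.0])
```

An algorithm that assigns probability 0.99 rather than 0.9 to a true outbreak is rewarded, which ordinary sensitivity/specificity counts at a fixed threshold would not capture.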

  1. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is under development. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  2. Quantification of Treatment Effect Modification on Both an Additive and Multiplicative Scale

    PubMed Central

    Girerd, Nicolas; Rabilloud, Muriel; Pibarot, Philippe; Mathieu, Patrick; Roy, Pascal

    2016-01-01

Background In both observational and randomized studies, associations with overall survival are by and large assessed on a multiplicative scale using the Cox model. However, clinicians and clinical researchers have an ardent interest in assessing absolute benefit associated with treatments. In older patients, some studies have reported lower relative treatment effect, which might translate into similar or even greater absolute treatment effect given their high baseline hazard for clinical events. Methods The effect of treatment and the effect modification of treatment were respectively assessed using a multiplicative and an additive hazard model in an analysis adjusted for propensity score in the context of coronary surgery. Results The multiplicative model yielded a lower relative hazard reduction with bilateral internal thoracic artery grafting in older patients (Hazard ratio for interaction/year = 1.03, 95%CI: 1.00 to 1.06, p = 0.05) whereas the additive model reported a similar absolute hazard reduction with increasing age (Delta for interaction/year = 0.10, 95%CI: -0.27 to 0.46, p = 0.61). The number needed to treat derived from the propensity score-adjusted multiplicative model was remarkably similar at the end of the follow-up in patients aged ≤ 60 and in patients > 70. Conclusions The present example demonstrates that a lower treatment effect in older patients on a relative scale can conversely translate into a similar treatment effect on an additive scale due to large baseline hazard differences. Importantly, absolute risk reduction, either crude or adjusted, can be calculated from multiplicative survival models. We advocate for a wider use of the absolute scale, especially using additive hazard models, to assess treatment effect and treatment effect modification. PMID:27045168
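The central point of this record, that a constant relative effect implies a larger absolute effect on a higher baseline hazard, can be illustrated with a minimal sketch under an exponential (constant-hazard) survival model. The hazard values and the `absolute_risk_reduction` helper are hypothetical, chosen only to mirror the paper's reasoning, not taken from its data.

```python
import math

def absolute_risk_reduction(baseline_hazard, hazard_ratio, t):
    """Absolute risk reduction at time t under a constant-hazard
    (exponential) survival model with proportional hazards."""
    risk_control = 1.0 - math.exp(-baseline_hazard * t)
    risk_treated = 1.0 - math.exp(-baseline_hazard * hazard_ratio * t)
    return risk_control - risk_treated

# The same relative effect (HR = 0.8) applied to two baseline hazards:
arr_low = absolute_risk_reduction(0.01, 0.8, t=10)   # low-risk patient
arr_high = absolute_risk_reduction(0.05, 0.8, t=10)  # high-risk patient
# arr_high exceeds arr_low: identical relative benefit, larger absolute benefit.
```

This is why a hazard ratio attenuating with age can still coexist with a stable, or even growing, absolute benefit.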

  3. Orthographic and Phonological Neighborhood Databases across Multiple Languages.

    PubMed

    Marian, Viorica

    2017-01-01

    The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.

  4. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used for convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonics truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparison among these simulations, and the role the small-scale fields play in the evolution of the large-scale fields as expressed through the SGS terms in the LES.

  5. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  6. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. 
Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
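Improvement (2) above, splitting large sequence files on record boundaries for better downstream load balance, can be sketched for FASTQ input (four lines per record). The `split_fastq` helper is an illustrative assumption, not Rainbow's actual implementation.

```python
def split_fastq(lines, n_chunks):
    """Split FASTQ text (four lines per record) into at most n_chunks
    pieces on record boundaries, keeping chunk sizes nearly equal so
    downstream alignment jobs are load-balanced."""
    records = [lines[i:i + 4] for i in range(0, len(lines), 4)]
    per_chunk = -(-len(records) // n_chunks)  # ceiling division
    return [sum(records[i:i + per_chunk], [])
            for i in range(0, len(records), per_chunk)]
```

Splitting on record boundaries matters because cutting a 4-line FASTQ record in half would corrupt the input to every downstream aligner.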

  7. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing

    PubMed Central

    2013-01-01

    Background Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Results Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Conclusions Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. 
For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html. PMID:23802613

  8. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.

    PubMed

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G

    2017-04-07

Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.

  9. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules

    PubMed Central

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.

    2017-01-01

Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351

  10. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
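For contrast with the paper's de-clumping approach, the simple corrections it improves upon can be sketched as follows. Both treat the m tests as independent, which is exactly why they over-correct for SNPs in strong LD; the function names are illustrative, not from the paper.

```python
def bonferroni(pvals, alpha=0.05):
    """Classical Bonferroni: scale each p-value by the number of tests m
    (capped at 1) and return the per-test significance threshold alpha/m."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals], alpha / m

def sidak_threshold(alpha, m):
    """Sidak per-test threshold, exact for m *independent* tests. With
    correlated SNPs the effective number of independent tests is smaller
    than m, so both corrections become conservative."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)
```

With millions of correlated SNPs, alpha/m shrinks far below what the effective number of independent tests would justify, motivating region-specific (local) thresholds of the kind the paper discusses.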

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  12. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber

    PubMed Central

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106

  13. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber.

    PubMed

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg-Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers.

  14. Interactions of multi-scale heterogeneity in the lithosphere: Australia

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Yoshizawa, K.; Furumura, T.

    2017-10-01

Understanding the complex heterogeneity of the continental lithosphere involves a wide variety of spatial scales and the synthesis of multiple classes of information. Seismic surface waves and multiply reflected body waves provide the main constraints on broad-scale structure, and bounds on the extent of the lithosphere-asthenosphere transition (LAT) can be found from the vertical gradients of S wavespeed. Information on finer-scale structures comes through body wave studies, including detailed seismic tomography and P-wave reflectivity extracted from stacked autocorrelograms of continuous component records. With the inclusion of deterministic large-scale structure and realistic medium-scale stochastic features, fine-scale variations are subdued. The resulting multi-scale heterogeneity model for the Australian region gives a good representation of the character of observed seismograms and their geographic variations and matches the observations of P-wave reflectivity. P reflections in the 0.5-3.0 Hz band in the uppermost mantle suggest variations on vertical scales of a few hundred metres with amplitudes of the order of 1%. Interference of waves reflected or converted at sequences of such modest variations in physical properties produces relatively simple behaviour for lower frequencies, which can suggest simpler structures than are actually present. Vertical changes in the character of fine-scale heterogeneity can produce apparent discontinuities. In Central Australia a 'mid-lithospheric discontinuity' can be tracked via changes in frequency content of station reflectivity, with links to the broad-scale pattern of wavespeed gradients and, in particular, the gradients of radial anisotropy. Comparisons with xenolith results from southeastern Australia indicate a strong tie between geochemical stratification and P-wave reflectivity.

  15. PLEXdb: Gene expression resources for plants and plant pathogens

    USDA-ARS?s Scientific Manuscript database

    PLEXdb (Plant Expression Database), in partnership with community databases, supports comparisons of gene expression across multiple plant and pathogen species, promoting individuals and/or consortia to upload genome-scale data sets to contrast them to previously archived data. These analyses facili...

  16. Pilot Comparison of Radiance Temperature Scale Realization Between NIMT and NMIJ

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Yamada, Y.; Ishii, J.

    2015-03-01

A pilot comparison of radiance temperature scale realizations between the National Institute of Metrology Thailand (NIMT) and the National Metrology Institute of Japan (NMIJ) was conducted. At the two national metrology institutes (NMIs), a 900 nm radiation thermometer, used as the transfer artifact, was calibrated by means of a multiple fixed-point method using fixed-point blackbodies at the Zn, Al, Ag, and Cu points, and by means of relative spectral responsivity measurements according to the International Temperature Scale of 1990 (ITS-90) definition. The Sakuma-Hattori equation is used for interpolating the radiance temperature scale between the four fixed points and also for extrapolating the ITS-90 temperature scale to 2000 °C. This paper compares the calibration results in terms of fixed-point measurements, relative spectral responsivity, and finally the radiance temperature scale. Good agreement for the fixed-point measurements was found when a correction for the change of the internal temperature of the artifact was applied, using the temperature coefficient measured at the NMIJ. For the realized radiance temperature range from 400 °C to 1100 °C, the resulting scale differences between the two NMIs are well within the combined scale comparison uncertainty of 0.12 °C. The relative spectral responsivity measured at the NIMT has a curve comparable to that measured at the NMIJ, especially in the out-of-band region, yielding an ITS-90 scale difference within 1.0 °C from the Cu point to 2000 °C, whereas the combined NIMT and NMIJ realization comparison uncertainty is 1.2 °C at 2000 °C.
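The Sakuma-Hattori interpolation used in this comparison has a closed-form inverse, which a minimal sketch can show. The instrument constants A (roughly the effective wavelength in micrometres), B, and C below are illustrative placeholders, not the fitted values from either NMI.

```python
import math

C2 = 14388.0  # second radiation constant, in um*K

def sakuma_hattori(T, A, B, C):
    """Planckian Sakuma-Hattori form: detector signal as a function of
    temperature T (kelvin). A, B, C are instrument constants fitted at
    the fixed points."""
    return C / math.expm1(C2 / (A * T + B))

def temperature_from_signal(S, A, B, C):
    """Closed-form inverse of the Sakuma-Hattori equation."""
    return (C2 / math.log(C / S + 1.0) - B) / A
```

A round trip through both functions at, say, the copper freezing point (1357.77 K) recovers the input temperature, which is what makes the equation convenient for interpolating between fixed points and extrapolating beyond them.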

  17. Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis

    NASA Astrophysics Data System (ADS)

    Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi

    2017-03-01

Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result, it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.

  18. Whole-Genome Comparison Reveals Novel Genetic Elements That Characterize the Genome of Industrial Strains of Saccharomyces cerevisiae

    PubMed Central

    Borneman, Anthony R.; Desany, Brian A.; Riches, David; Affourtit, Jason P.; Forgan, Angus H.; Pretorius, Isak S.; Egholm, Michael; Chambers, Paul J.

    2011-01-01

Human intervention has subjected the yeast Saccharomyces cerevisiae to multiple rounds of independent domestication and thousands of generations of artificial selection. As a result, this species comprises a genetically diverse collection of natural isolates as well as domesticated strains that are used in specific industrial applications. However, the scope of genetic diversity that was captured during the domesticated evolution of the industrial representatives of this important organism remains to be determined. To begin to address this, we have produced whole-genome assemblies of six commercial strains of S. cerevisiae (four wine and two brewing strains). These represent the first genome assemblies produced from S. cerevisiae strains in their industrially-used forms and the first high-quality assemblies for S. cerevisiae strains used in brewing. By comparing these sequences to six existing high-coverage S. cerevisiae genome assemblies, clear signatures were found that defined each industrial class of yeast. This genetic variation comprised both single nucleotide polymorphisms and large-scale insertions and deletions, with the latter often being associated with ORF heterogeneity between strains. This included the discovery of more than twenty probable genes that had not been identified previously in the S. cerevisiae genome. Comparison of this large number of S. cerevisiae strains also enabled the characterization of a cluster of five ORFs that have integrated into the genomes of the wine and bioethanol strains on multiple occasions and at diverse genomic locations via what appears to involve the resolution of a circular DNA intermediate. This work suggests that, despite the scrutiny that has been directed at the yeast genome, there remains a significant reservoir of ORFs and novel modes of genetic transmission that may have significant phenotypic impact in this important model and industrial species. PMID:21304888

  19. An NCME Instructional Module on Booklet Designs in Large-Scale Assessments of Student Achievement: Theory and Practice

    ERIC Educational Resources Information Center

    Frey, Andreas; Hartig, Johannes; Rupp, Andre A.

    2009-01-01

    In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…

  20. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  1. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  2. A successful trap design for capturing large terrestrial snakes

    Treesearch

    Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer

    2005-01-01

    Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...

  3. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison and its implications: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  4. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison and its implications: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  5. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
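    The weighting idea in this record builds on classical weighted multiple testing. As a minimal illustration only, the sketch below shows the standard weighted Benjamini-Hochberg step-up rule, not the paper's convex-optimization method; the function name, weights, and p-values are all illustrative.

    ```python
    def weighted_bh(pvals, weights, alpha=0.05):
        """Weighted Benjamini-Hochberg: reject hypothesis i when its
        weighted p-value p_i / w_i passes the step-up threshold.
        Weights must be nonnegative and average to 1 so that FDR
        control at level alpha is preserved."""
        m = len(pvals)
        assert abs(sum(weights) / m - 1.0) < 1e-9, "weights must average to 1"
        # Sort hypotheses by weighted p-value.
        order = sorted(range(m), key=lambda i: pvals[i] / weights[i])
        k = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] / weights[i] <= rank * alpha / m:
                k = rank  # largest rank passing the step-up criterion
        return sorted(order[:k])

    # Upweighting the first two hypotheses (suspected larger effects)
    # admits a borderline p-value that uniform weights would miss.
    pvals = [0.001, 0.025, 0.04, 0.3, 0.7]
    weights = [2.0, 2.0, 0.4, 0.3, 0.3]  # average 1.0
    print(weighted_bh(pvals, weights, alpha=0.05))  # → [0, 1]
    ```

    With uniform weights the same data yields only `[0]`, which is the sense in which prioritization through upweighting can change the discovery set.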

  6. Methane eddy covariance flux measurements from a low flying aircraft: Bridging the scale gap between local and regional emissions estimates

    NASA Astrophysics Data System (ADS)

    Sayres, D. S.; Dobosy, R.; Dumas, E. J.; Kochendorfer, J.; Wilkerson, J.; Anderson, J. G.

    2017-12-01

    The Arctic contains a large reservoir of organic matter stored in permafrost and clathrates. Varying geology and hydrology across the Arctic, even on small scales, can cause large variability in surface carbon fluxes and in the partitioning between methane and carbon dioxide. This makes upscaling from point-source measurements, such as small flux towers or chambers, difficult. Ground-based measurements can yield high temporal resolution and detailed information about a specific location, but because most of the Arctic is inaccessible, they have to date been made at very few sites. In August 2013, a small aircraft, flying low over the surface (5-30 m) and carrying an air turbulence probe and spectroscopic instruments to measure methane, carbon dioxide, nitrous oxide, water vapor, and their isotopologues, flew over the North Slope of Alaska. During the six flights, multiple comparisons were made with a ground-based eddy covariance tower, and three regional survey flights measured fluxes over three areas of approximately 2500 km² each. We present analysis using the Flux Fragment Method and surface landscape classification maps to relate the fluxes to different surface land types. We show examples of how we use the aircraft data to upscale from an eddy covariance tower and map spatial variability across different ecotopes.
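    The flux estimates described in this record rest on the eddy covariance principle: the vertical turbulent flux of a scalar is the time-averaged product of the fluctuations of vertical wind and scalar concentration. A minimal sketch of that covariance computation, with illustrative names and toy data rather than the authors' actual processing chain:

    ```python
    def eddy_flux(w, c):
        """Eddy-covariance flux: the mean product of fluctuations
        w' = w - mean(w) and c' = c - mean(c) over an averaging window
        (Reynolds decomposition). Units: [w] * [c], e.g. m/s * mol/m^3."""
        n = len(w)
        w_bar = sum(w) / n
        c_bar = sum(c) / n
        return sum((wi - w_bar) * (ci - c_bar) for wi, ci in zip(w, c)) / n

    # Toy series: updrafts (w > 0) systematically carry higher methane
    # concentrations, giving a positive (upward) flux.
    w = [0.5, -0.5, 0.4, -0.4, 0.6, -0.6]
    c = [1.9, 1.7, 1.9, 1.7, 1.9, 1.7]
    print(eddy_flux(w, c))
    ```

    Real processing additionally involves detrending or high-pass filtering, coordinate rotation, and density corrections, none of which are shown here.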

  7. Population-wide mortality in multiple forest types in western North America: onset, extent, and severity of impacts as indicators of climatic influence

    Treesearch

    J. D. Shaw; J. N. Long; M. T. Thompson; R. J. DeRose

    2010-01-01

    A complex of drought, insects, and disease is causing widespread mortality in multiple forest types across western North America. These forest types range from dry Pinus-Juniperus woodlands to moist, montane Picea-Abies forests. Although large-scale mortality events are known from the past and considered part of natural cycles, recent events have largely been...

  8. Rural-Urban Comparisons of Nursing Home Residents With Multiple Sclerosis

    ERIC Educational Resources Information Center

    Buchanan, Robert J.; Wang, Suojin; Zhu, Li; Kim, MyungSuk

    2004-01-01

    Multiple sclerosis (MS) is the most common neurologic disease that disables younger adults, affecting as many as 350,000 Americans. Purpose: The objectives of this study are to develop profiles of nursing home residents with MS from rural areas and compare them to residents with MS who lived in urban areas, suburban areas, and large towns.…

  9. Energetic basis for the molecular-scale organization of bone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Jinhui; Battle, Keith C.; Pan, Haihua

    2014-12-24

    The remarkable properties of bone derive from a highly organized arrangement of co-aligned nm-scale apatite platelets within a fibrillar collagen matrix. The origin of this arrangement is poorly understood and the crystal structures of hydroxyapatite (HAP) and the non-mineralized collagen fibrils alone do not provide an explanation. Moreover, little is known about collagen-apatite interaction energies, which should strongly influence both the molecular-scale organization and the resulting mechanical properties of the composite. We investigated collagen-mineral interactions by combining dynamic force spectroscopy (DFS) measurements of binding energies with molecular dynamics (MD) simulations of binding and AFM observations of collagen adsorption on single crystals of calcium phosphate for four mineral phases of potential importance in bone formation. In all cases, we observe a strong preferential orientation of collagen binding, but comparison between the observed orientations and TEM analyses of native tissues shows that only calcium-deficient apatite (CDAP) provides an interface with collagen that is consistent with both. MD simulations predict preferred collagen orientations that agree with observations, and results from both MD and DFS reveal large values for the binding energy due to multiple binding sites. These findings reconcile apparent contradictions inherent in a hydroxyapatite or carbonated apatite (CAP) model of bone mineral and provide an energetic rationale for the molecular-scale organization of bone.

  10. Anomalous scaling of passive scalar fields advected by the Navier-Stokes velocity ensemble: effects of strong compressibility and large-scale anisotropy.

    PubMed

    Antonov, N V; Kostenko, M M

    2014-12-01

    The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field. The latter is governed by the Navier-Stokes equation for compressible fluid, subject to external random force with the covariance ∝ δ(t-t') k^(4-d-y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y. The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant. The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed.

  11. Energetic basis for the molecular-scale organization of bone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Jinhui; Battle, Keith C.; Pan, Haihua

    The remarkable properties of bone derive from a highly organized arrangement of co-aligned nm-scale apatite platelets within a fibrillar collagen matrix. The origin of this arrangement is poorly understood and the crystal structures of hydroxyapatite (HAP) and the non-mineralized collagen fibrils alone do not provide an explanation. Moreover, little is known about collagen-apatite interaction energies, which should strongly influence both the molecular-scale organization and the resulting mechanical properties of the composite. We investigated collagen-mineral interactions by combining dynamic force spectroscopy (DFS) measurements of binding energies with molecular dynamics (MD) simulations of binding and AFM observations of collagen adsorption on single crystals of calcium phosphate for four mineral phases of potential importance in bone formation. In all cases, we observe a strong preferential orientation of collagen binding, but comparison between the observed orientations and TEM analyses of native tissues shows that only calcium-deficient apatite (CDAP) provides an interface with collagen that is consistent with both. MD simulations predict preferred collagen orientations that agree with observations, and results from both MD and DFS reveal large values for the binding energy due to multiple binding sites. These findings reconcile apparent contradictions inherent in a hydroxyapatite or carbonated apatite (CAP) model of bone mineral and provide an energetic rationale for the molecular-scale organization of bone.

  12. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Mantry, Sonny; Petriello, Frank

    2010-05-01

    We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh≫pT≫ΛQCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher order radiative corrections in αs(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and ΛQCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.

  13. Biotic and abiotic variables influencing plant litter breakdown in streams: a global study.

    PubMed

    Boyero, Luz; Pearson, Richard G; Hui, Cang; Gessner, Mark O; Pérez, Javier; Alexandrou, Markos A; Graça, Manuel A S; Cardinale, Bradley J; Albariño, Ricardo J; Arunachalam, Muthukumarasamy; Barmuta, Leon A; Boulton, Andrew J; Bruder, Andreas; Callisto, Marcos; Chauvet, Eric; Death, Russell G; Dudgeon, David; Encalada, Andrea C; Ferreira, Verónica; Figueroa, Ricardo; Flecker, Alexander S; Gonçalves, José F; Helson, Julie; Iwata, Tomoya; Jinggut, Tajang; Mathooko, Jude; Mathuriau, Catherine; M'Erimba, Charles; Moretti, Marcelo S; Pringle, Catherine M; Ramírez, Alonso; Ratnarajah, Lavenia; Rincon, José; Yule, Catherine M

    2016-04-27

    Plant litter breakdown is a key ecological process in terrestrial and freshwater ecosystems. Streams and rivers, in particular, contribute substantially to global carbon fluxes. However, there is little information available on the relative roles of different drivers of plant litter breakdown in fresh waters, particularly at large scales. We present a global-scale study of litter breakdown in streams to compare the roles of biotic, climatic and other environmental factors on breakdown rates. We conducted an experiment in 24 streams encompassing latitudes from 47.8° N to 42.8° S, using litter mixtures of local species differing in quality and phylogenetic diversity (PD), and alder (Alnus glutinosa) to control for variation in litter traits. Our models revealed that breakdown of alder was driven by climate, with some influence of pH, whereas variation in breakdown of litter mixtures was explained mainly by litter quality and PD. Effects of litter quality and PD and stream pH were more positive at higher temperatures, indicating that different mechanisms may operate at different latitudes. These results reflect global variability caused by multiple factors, but unexplained variance points to the need for expanded global-scale comparisons. © 2016 The Author(s).

  14. Biotic and abiotic variables influencing plant litter breakdown in streams: a global study

    PubMed Central

    Pearson, Richard G.; Hui, Cang; Gessner, Mark O.; Pérez, Javier; Alexandrou, Markos A.; Graça, Manuel A. S.; Cardinale, Bradley J.; Albariño, Ricardo J.; Arunachalam, Muthukumarasamy; Barmuta, Leon A.; Boulton, Andrew J.; Bruder, Andreas; Callisto, Marcos; Chauvet, Eric; Death, Russell G.; Dudgeon, David; Encalada, Andrea C.; Ferreira, Verónica; Figueroa, Ricardo; Flecker, Alexander S.; Gonçalves, José F.; Helson, Julie; Iwata, Tomoya; Jinggut, Tajang; Mathooko, Jude; Mathuriau, Catherine; M'Erimba, Charles; Moretti, Marcelo S.; Pringle, Catherine M.; Ramírez, Alonso; Ratnarajah, Lavenia; Rincon, José; Yule, Catherine M.

    2016-01-01

    Plant litter breakdown is a key ecological process in terrestrial and freshwater ecosystems. Streams and rivers, in particular, contribute substantially to global carbon fluxes. However, there is little information available on the relative roles of different drivers of plant litter breakdown in fresh waters, particularly at large scales. We present a global-scale study of litter breakdown in streams to compare the roles of biotic, climatic and other environmental factors on breakdown rates. We conducted an experiment in 24 streams encompassing latitudes from 47.8° N to 42.8° S, using litter mixtures of local species differing in quality and phylogenetic diversity (PD), and alder (Alnus glutinosa) to control for variation in litter traits. Our models revealed that breakdown of alder was driven by climate, with some influence of pH, whereas variation in breakdown of litter mixtures was explained mainly by litter quality and PD. Effects of litter quality and PD and stream pH were more positive at higher temperatures, indicating that different mechanisms may operate at different latitudes. These results reflect global variability caused by multiple factors, but unexplained variance points to the need for expanded global-scale comparisons. PMID:27122551

  15. Navier-Stokes solutions of unsteady separation induced by a vortex: Comparison with theory and influence of a moving wall

    NASA Astrophysics Data System (ADS)

    Obabko, Aleksandr Vladimirovich

    Numerical solutions of the unsteady Navier-Stokes equations are considered for the flow induced by a thick-core vortex convecting along an infinite surface in a two-dimensional incompressible flow. The formulation is considered as a model problem of the dynamic-stall vortex and is relevant to other unsteady separation phenomena including vorticity ejections in juncture flows and the vorticity production mechanism in turbulent boundary layers. Induced by an adverse streamwise pressure gradient due to the presence of the vortex above the wall, a primary recirculation region forms and evolves toward a singular solution of the unsteady non-interacting boundary-layer equations. The resulting eruptive spike provokes a small-scale viscous-inviscid interaction in the high-Reynolds-number regime. In the moderate-Reynolds-number regime, the growing recirculation region initiates a large-scale interaction in the form of local changes in the streamwise pressure gradient, accelerating the spike formation and resulting small-scale interaction through development of a region of streamwise compression. It also was found to induce regions of streamwise expansion and "child" recirculation regions that contribute to ejections of near-wall vorticity and splitting of the "parent" region into multiple co-rotating eddies. These eddies later merge into a single amalgamated eddy that is observed to pair with the detaching vortex similar to the low-Reynolds-number regime where the large-scale interaction occurs, but there is no spike or subsequent small-scale interaction. It is also found that increasing the wall speed or vortex convection velocity toward a critical value results in solutions that are indicative of flows at lower Reynolds numbers, eventually leading to suppression of unsteady separation and vortex detachment processes.

  16. Large-Scale Assessment of Change in Student Achievement: Dutch Primary School Students' Results on Written Division in 1997 and 2004 as an Example

    ERIC Educational Resources Information Center

    van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf

    2009-01-01

    This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…

  17. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
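    Priority-based particle swarm variants like the one described here typically decode a particle's priority vector into a schedule using a serial schedule generation scheme. The sketch below shows that decoding step for a single renewable resource only; the names are illustrative, and the paper's multi-project, multi-mode, fuzzy-random model is not reproduced.

    ```python
    def serial_sgs(durations, demands, preds, capacity, priority):
        """Serial schedule generation scheme: repeatedly pick the
        highest-priority activity whose predecessors are scheduled, and
        start it at the earliest time satisfying both precedence and the
        renewable-resource capacity. Returns activity start times."""
        n = len(durations)
        start, usage, done = {}, {}, set()  # usage: time step -> units in use
        while len(done) < n:
            a = min((i for i in range(n) if i not in done and preds[i] <= done),
                    key=lambda i: priority[i])
            # Earliest precedence-feasible start.
            t = max((start[p] + durations[p] for p in preds[a]), default=0)
            # Shift right until the resource profile admits the activity.
            while any(usage.get(s, 0) + demands[a] > capacity
                      for s in range(t, t + durations[a])):
                t += 1
            for s in range(t, t + durations[a]):
                usage[s] = usage.get(s, 0) + demands[a]
            start[a] = t
            done.add(a)
        return start

    # Four activities, capacity 2: activities 1 and 2 can run in parallel.
    starts = serial_sgs(durations=[3, 2, 2, 2], demands=[2, 1, 1, 2],
                        preds=[set(), {0}, {0}, {1, 2}],
                        capacity=2, priority=[0, 1, 2, 3])
    print(starts)  # → {0: 0, 1: 3, 2: 3, 3: 5}
    ```

    In a priority-based PSO, each particle's position supplies the `priority` vector, and the resulting makespan (here 7) feeds back into the fitness evaluation.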

  18. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  19. The influence of large-scale wind power on global climate.

    PubMed

    Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J

    2004-11-16

    Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.

  20. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe

    NASA Astrophysics Data System (ADS)

    Pinto, Joaquim G.; Gómara, Iñigo; Masato, Giacomo; Dacre, Helen F.; Woollings, Tim; Caballero, Rodrigo

    2015-04-01

    Some recent winters in Western Europe have been characterized by the occurrence of multiple extratropical cyclones following a similar path. The occurrence of such cyclone clusters leads to large socio-economic impacts due to damaging winds, storm surges, and floods. Recent studies have statistically characterized the clustering of extratropical cyclones over the North Atlantic and Europe and hypothesized potential physical mechanisms responsible for their formation. Here we analyze 4 months characterized by multiple cyclones over Western Europe (February 1990, January 1993, December 1999, and January 2007). The evolution of the eddy driven jet stream, Rossby wave-breaking, and upstream/downstream cyclone development are investigated to infer the role of the large-scale flow and to determine if clustered cyclones are related to each other. Results suggest that optimal conditions for the occurrence of cyclone clusters are provided by a recurrent extension of an intensified eddy driven jet toward Western Europe lasting at least 1 week. Multiple Rossby wave-breaking occurrences on both the poleward and equatorward flanks of the jet contribute to the development of these anomalous large-scale conditions. The analysis of the daily weather charts reveals that upstream cyclone development (secondary cyclogenesis, where new cyclones are generated on the trailing fronts of mature cyclones) is strongly related to cyclone clustering, with multiple cyclones developing on a single jet streak. The present analysis permits a deeper understanding of the physical reasons leading to the occurrence of cyclone families over the North Atlantic, enabling a better estimation of the associated cumulative risk over Europe.

  1. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is distribution-free, robust to spatial dependence, and accounts for time series structure.
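    The block length mentioned in this record matters because local resampling tests must preserve serial correlation in the climate time series. A minimal sketch of the underlying moving-block bootstrap idea, with illustrative names and not the authors' exact procedure:

    ```python
    import random

    def moving_block_bootstrap(series, block_len, rng=random):
        """Resample a time series by concatenating randomly chosen
        contiguous blocks of length block_len, preserving short-range
        serial correlation that an i.i.d. bootstrap would destroy."""
        n = len(series)
        blocks_needed = -(-n // block_len)  # ceiling division
        out = []
        for _ in range(blocks_needed):
            s = rng.randrange(n - block_len + 1)  # random block start
            out.extend(series[s:s + block_len])
        return out[:n]  # trim to the original length

    rng = random.Random(0)  # fixed seed for reproducibility
    x = list(range(10))
    resampled = moving_block_bootstrap(x, block_len=3, rng=rng)
    print(len(resampled))  # → 10
    ```

    Repeating the resampling many times yields a null distribution for each local test statistic; choosing `block_len` too small reintroduces the independence assumption the method is meant to avoid.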

  2. Effects of multiple-scale driving on turbulence statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Hyunju; Cho, Jungyeon

    2014-01-01

    Turbulence is ubiquitous in astrophysical fluids such as the interstellar medium and the intracluster medium. In turbulence studies, it is customary to assume that fluid is driven on a single scale. However, in astrophysical fluids, there can be many different driving mechanisms that act on different scales. If there are multiple energy-injection scales, the process of energy cascade and turbulence dynamo will be different compared with the case of a single energy-injection scale. In this work, we perform three-dimensional incompressible/compressible magnetohydrodynamic turbulence simulations. We drive turbulence in Fourier space in two wavenumber ranges, 2 ≤ k ≤ √12 (large scale) and 15 ≲ k ≲ 26 (small scale). We inject a different amount of energy in each range by changing the amplitude of forcing in the range. We present the time evolution of the kinetic and magnetic energy densities and discuss the turbulence dynamo in the presence of energy injections at two scales. We show how kinetic, magnetic, and density spectra are affected by the two-scale energy injections and we discuss the observational implications. In the case ε_L < ε_S, where ε_L and ε_S are the energy-injection rates at the large and small scales, respectively, our results show that even a tiny amount of large-scale energy injection can significantly change the properties of turbulence. On the other hand, when ε_L ≳ ε_S, the small-scale driving does not influence the turbulence statistics much unless ε_L ∼ ε_S.

  3. Cognitive functioning and disturbances of mood in UK veterans of the Persian Gulf War: a comparative study.

    PubMed

    David, A S; Farrin, L; Hull, L; Unwin, C; Wessely, S; Wykes, T

    2002-11-01

    Complaints of poor memory and concentration are common in veterans of the 1991 Persian Gulf War as are other symptoms. Despite a large research effort, such symptoms remain largely unexplained. A comprehensive battery of neuropsychological tests and rating scales was administered to 341 UK servicemen who were returnees from the Gulf War and peace keeping duties in Bosnia, plus non-deployed military controls. All were drawn from a large randomized survey. Most were selected on the basis of impaired physical functioning defined operationally. Group comparisons revealed an association between physical functioning and symptoms of depression, post-traumatic stress reactions, increased anger and subjective cognitive failures. Poorer performance on some general cognitive measures, sequencing and attention was also seen in association with being 'ill' but virtually all differences disappeared after adjusting for depressed mood or multiple comparisons. Deployment was also associated with symptoms of post-traumatic stress and subjective cognitive failures, independently of health status, as well as minor general cognitive and constructional impairment. The latter remained significantly poorer in the Gulf group even after adjusting for depressed mood. Disturbances of mood are more prominent than quantifiable cognitive deficits in Gulf War veterans and probably lead to subjective underestimation of ability. Task performance deficits can themselves be explained by depressed mood although the direction of causality cannot be inferred confidently. Reduced constructional ability cannot be explained in this way and could be an effect of Gulf-specific exposures.

  4. On the large scale structure of X-ray background sources

    NASA Technical Reports Server (NTRS)

    Bi, H. G.; Meszaros, A.; Meszaros, P.

    1991-01-01

    The large scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large scale structures at redshifts between 0.5 and 4.0.

  5. A field comparison of multiple techniques to quantify groundwater - surface-water interactions

    USGS Publications Warehouse

    González-Pinzón, Ricardo; Ward, Adam S; Hatch, Christine E; Wlostowski, Adam N; Singha, Kamini; Gooseff, Michael N.; Haggerty, Roy; Harvey, Judson; Cirpka, Olaf A; Brock, James T

    2015-01-01

    Groundwater–surface-water (GW-SW) interactions in streams are difficult to quantify because of heterogeneity in hydraulic and reactive processes across a range of spatial and temporal scales. The challenge of quantifying these interactions has led to the development of several techniques, from centimeter-scale probes to whole-system tracers, including chemical, thermal, and electrical methods. We co-applied conservative and smart reactive solute-tracer tests, measurement of hydraulic heads, distributed temperature sensing, vertical profiles of solute tracer and temperature in the stream bed, and electrical resistivity imaging in a 450-m reach of a 3rd-order stream. GW-SW interactions were not spatially expansive, but were high in flux through a shallow hyporheic zone surrounding the reach. NaCl and resazurin tracers suggested different surface–subsurface exchange patterns in the upper ⅔ and lower ⅓ of the reach. Subsurface sampling of tracers and vertical thermal profiles quantified relatively high fluxes through a 10- to 20-cm deep hyporheic zone with chemical reactivity of the resazurin tracer indicated at 3-, 6-, and 9-cm sampling depths. Monitoring of hydraulic gradients along transects with MINIPOINT streambed samplers starting ∼40 m from the stream indicated that groundwater discharge prevented development of a larger hyporheic zone, which progressively decreased from the stream thalweg toward the banks. Distributed temperature sensing did not detect extensive inflow of ground water to the stream, and electrical resistivity imaging showed limited large-scale hyporheic exchange. 
We recommend choosing technique(s) based on: 1) clear definition of the questions to be addressed (physical, biological, or chemical processes), 2) explicit identification of the spatial and temporal scales to be covered and those required to provide an appropriate context for interpretation, and 3) maximizing generation of mechanistic understanding and reducing costs of implementing multiple techniques through collaborative research.

  6. Raccoon spatial requirements and multi-scale habitat selection within an intensively managed central Appalachian forest

    USGS Publications Warehouse

    Owen, Sheldon F.; Berl, Jacob L.; Edwards, John W.; Ford, W. Mark; Wood, Petra Bohall

    2015-01-01

    We studied a raccoon (Procyon lotor) population within a managed central Appalachian hardwood forest in West Virginia to investigate the effects of intensive forest management on raccoon spatial requirements and habitat selection. Raccoon home-range (95% utilization distribution) and core-area (50% utilization distribution) size differed between sexes, with males maintaining home ranges and core areas twice as large as those of females. Home-range and core-area size did not differ between seasons for either sex. We used compositional analysis to quantify raccoon selection of six different habitat types at multiple spatial scales. Raccoons selected riparian corridors (riparian management zones [RMZ]) and intact forests (> 70 y old) at the core-area spatial scale. RMZs likely were used by raccoons because they provided abundant denning resources (i.e., large-diameter trees) as well as access to water. Habitat composition associated with raccoon foraging locations indicated selection for intact forests, riparian areas, and regenerating harvest (stands <10 y old). Although raccoons were able to utilize multiple habitat types for foraging, selection of intact forest and RMZs at multiple spatial scales indicates the need for mature forest (with large-diameter trees) for this species in managed forests of the central Appalachians.

  7. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single- or multiple-species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single- or multiple-species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
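    As an illustration of the simple-allometry approach described above (not the study's own code or data), the sketch below fits log CL against log body weight across species and extrapolates to a 70-kg human. The liver-blood-flow variant rescales per-kg monkey clearance by the ratio of human to monkey liver blood flow; all species values and the blood-flow numbers are illustrative assumptions, not figures from the record.

```python
import numpy as np

def predict_human_cl(body_weights, clearances, human_bw=70.0):
    """Simple allometry: fit log(CL) = log(a) + b*log(BW) across
    species, then extrapolate CL = a * BW**b to a 70-kg human."""
    b, log_a = np.polyfit(np.log(body_weights), np.log(clearances), 1)
    return np.exp(log_a) * human_bw ** b

def monkey_lbf_method(cl_monkey_per_kg, lbf_monkey=44.0, lbf_human=21.0):
    """Monkey liver blood flow method: scale per-kg monkey clearance by
    the human-to-monkey liver blood flow ratio (approximate ballpark
    values in mL/min/kg; treat them as assumptions, not study data)."""
    return cl_monkey_per_kg * lbf_human / lbf_monkey

# hypothetical mouse, rat, monkey, and dog values (BW in kg, CL in mL/min)
bw = np.array([0.02, 0.25, 5.0, 10.0])
cl = np.array([0.9, 8.0, 100.0, 180.0])
cl_human = predict_human_cl(bw, cl)
```

    With these made-up values the fitted allometric exponent comes out near the commonly cited ~0.75-1.0 range, and the human estimate is simply the fitted power law evaluated at 70 kg.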

  8. Comparison of Penalty Functions for Sparse Canonical Correlation Analysis

    PubMed Central

    Chalise, Prabhakar; Fridley, Brooke L.

    2011-01-01

    Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, as in the case of large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated, the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso-type penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA, with and without the BIC filtering step, has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be a preferable penalty function for application of SCCA to genomic data. PMID:21984855
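    The four penalties compared in this record differ mainly in how they shrink small coefficients. The operators below are the standard componentwise thresholding rules associated with each penalty (Lasso/soft, Elastic-net, SCAD, Hard-threshold), written as a generic sketch rather than the paper's SCCA code; SCAD behaves like soft-thresholding near zero but leaves large coefficients unbiased.

```python
import numpy as np

def soft(w, lam):
    """Lasso: soft-thresholding, shrinks every coefficient toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def hard(w, lam):
    """Hard-thresholding: keeps coefficients above lam untouched."""
    return np.where(np.abs(w) > lam, w, 0.0)

def enet(w, lam1, lam2):
    """Naive elastic net: soft-threshold, then extra ridge shrinkage."""
    return soft(w, lam1) / (1.0 + lam2)

def scad(w, lam, a=3.7):
    """SCAD (Fan & Li): soft near zero, identity for |w| > a*lam,
    linear interpolation in between, so large effects are unbiased."""
    aw = np.abs(w)
    return np.where(aw <= 2 * lam, soft(w, lam),
           np.where(aw <= a * lam,
                    ((a - 1) * w - np.sign(w) * a * lam) / (a - 2),
                    w))
```

    For example, with lam = 1 a coefficient of 10 is shrunk to 9 by the Lasso rule but left at 10 by SCAD and hard-thresholding, which is the bias difference driving the paper's comparison.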

  9. LES-based filter-matrix lattice Boltzmann model for simulating fully developed turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Zhuo, Congshan; Zhong, Chengwen

    2016-11-01

    In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which had previously been shown to predict the turbulent near-wall region accurately. Fully developed turbulent channel flows were simulated at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, the Reynolds stress profile, and the root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model; some discrepancies in comparison with the direct numerical simulation (DNS) data of Kim et al. were also observed, owing to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES, a DNS on a finer grid was also carried out with the present FMLB-D3Q19 model. Detailed comparisons of the various computed turbulence statistics with available DNS benchmark data showed quite good agreement.

  10. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not able to tackle this huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
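    The pipeline stages named in this record (feature representation, feature indexing, searching) can be illustrated with a deliberately minimal sketch: exact nearest-neighbour search over L2-normalized feature vectors, where cosine similarity reduces to a dot product. All names here are hypothetical, and a real large-scale system would replace the exhaustive scan with approximate indexing (hashing, trees, or quantization).

```python
import numpy as np

def build_index(features):
    """'Indexing' stage: L2-normalize feature vectors so that cosine
    similarity later reduces to a single matrix-vector product."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, 1e-12)

def search(index, query, k=3):
    """'Searching' stage: return indices of the k database images most
    similar to the query, ranked by cosine similarity."""
    q = query / max(np.linalg.norm(query), 1e-12)
    sims = index @ q
    return np.argsort(-sims)[:k]
```

    Here each row of `features` would be the representation of one database image (e.g., a learned descriptor); retrieval cost grows linearly with collection size, which is exactly what approximate indexing methods are designed to avoid.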

  11. Relationships among measures of managerial personality traits.

    PubMed

    Miner, J B

    1976-08-01

    Comparisons were made to determine the degree of convergence among three measures associated with leadership success in large, hierarchic organizations in the business sector: the Miner Sentence Completion Scale; the Ghiselli Self-Description Inventory; and the F-Scale. Correlational analyses and comparisons between means were made using college student and business manager samples. The results indicated considerable convergence for the first two measures, but not for the F-Scale. The F-Scale was related to the Miner Sentence Completion Scale in the student group, but relationships were nonexistent among the managers. Analyses of the individual F-Scale items which produced the relationship among the students suggested that early family-related experiences and attitudes may contribute to the development of motivation to manage, but lose their relevance for it later, under the onslaught of actual managerial experience.

  12. Connecting the large- and the small-scale magnetic fields of solar-like stars

    NASA Astrophysics Data System (ADS)

    Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.

    2018-05-01

    A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the average latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters of flux emergence rate, differential rotation and meridional flow all affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.

  13. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  14. Safety-relevant hydrogeological properties of the claystone barrier of a Swiss radioactive waste repository: An evaluation using multiple lines of evidence

    NASA Astrophysics Data System (ADS)

    Gautschi, Andreas

    2017-09-01

    In Switzerland, the Opalinus Clay - a Jurassic (Aalenian) claystone formation - has been proposed as the first-priority host rock for a deep geological repository for both low- and intermediate-level and high-level radioactive wastes. An extensive site and host rock investigation programme has been carried out during the past 30 years in Northern Switzerland, comprising extensive 2D and 3D seismic surveys, a series of deep boreholes within and around potential geological siting regions, experiments in the international Mont Terri Rock Laboratory, compilations of data from Opalinus Clay in railway and motorway tunnels and comparisons with similar rocks. The hydrogeological properties of the Opalinus Clay that are relevant from the viewpoint of long-term safety are described and illustrated. The main conclusions are supported by multiple lines of evidence, demonstrating consistency of conclusions based on hydraulic properties, porewater chemistry, distribution of natural tracers across the Opalinus Clay as well as small- and large-scale diffusion models and the derived conceptual understanding of solute transport.

  15. Experimental comparison of conventional and nonlinear model-based control of a mixing tank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeggblom, K.E.

    1993-11-01

    In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control, where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs, based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.

  16. The Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2011-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) as the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second-order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at streamwise locations of the SRANS solution. However, the developed acoustic analogy could easily be based on any adjoint vector Green's function solver, such as one that makes no assumptions about the mean flow. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with the formulation of Morris and Miller (AIAAJ 2010). A large number of predictions are generated using three different nozzles over a wide range of fully expanded Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise labs. In addition, two models for the so-called 'fine-scale' mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include the propagation effects, especially in the upstream direction of the jet.

  17. Selecting habitat to survive: the impact of road density on survival in a large carnivore.

    PubMed

    Basille, Mathieu; Van Moorter, Bram; Herfindal, Ivar; Martin, Jodie; Linnell, John D C; Odden, John; Andersen, Reidar; Gaillard, Jean-Michel

    2013-01-01

    Habitat selection studies generally assume that animals select habitat and food resources at multiple scales to maximise their fitness. However, animals sometimes prefer habitats of apparently low quality, especially when considering the costs associated with spatially heterogeneous human disturbance. We used spatial variation in human disturbance, and its consequences on lynx survival, a direct fitness component, to test the Hierarchical Habitat Selection hypothesis from a population of Eurasian lynx Lynx lynx in southern Norway. Data from 46 lynx monitored with telemetry indicated that a high proportion of forest strongly reduced the risk of mortality from legal hunting at the home range scale, while increasing road density strongly increased such risk at the finer scale within the home range. We found hierarchical effects of the impact of human disturbance, with a higher road density at a large scale reinforcing its negative impact at a fine scale. Conversely, we demonstrated that lynx shifted their habitat selection to avoid areas with the highest road densities within their home ranges, thus supporting a compensatory mechanism at fine scale enabling lynx to mitigate the impact of large-scale disturbance. Human impact, positively associated with high road accessibility, was thus a stronger driver of lynx space use at a finer scale, with home range characteristics nevertheless constraining habitat selection. Our study demonstrates the truly hierarchical nature of habitat selection, which aims at maximising fitness by selecting against limiting factors at multiple spatial scales, and indicates that scale-specific heterogeneity of the environment is driving individual spatial behaviour, by means of trade-offs across spatial scales.

  18. On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans

    NASA Astrophysics Data System (ADS)

    Grooms, I.; Julien, K. A.; Fox-Kemper, B.

    2011-12-01

    Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.

  19. Reanalysis comparisons of upper tropospheric-lower stratospheric jets and multiple tropopauses

    NASA Astrophysics Data System (ADS)

    Manney, Gloria L.; Hegglin, Michaela I.; Lawrence, Zachary D.; Wargan, Krzysztof; Millán, Luis F.; Schwartz, Michael J.; Santee, Michelle L.; Lambert, Alyn; Pawson, Steven; Knosp, Brian W.; Fuller, Ryan A.; Daffer, William H.

    2017-09-01

    The representation of upper tropospheric-lower stratospheric (UTLS) jet and tropopause characteristics is compared in five modern high-resolution reanalyses for 1980 through 2014. Climatologies of upper tropospheric jet, subvortex jet (the lowermost part of the stratospheric vortex), and multiple tropopause frequency distributions in MERRA (Modern-Era Retrospective analysis for Research and Applications), ERA-I (ERA-Interim; the European Centre for Medium-Range Weather Forecasts, ECMWF, interim reanalysis), JRA-55 (the Japanese 55-year Reanalysis), and CFSR (the Climate Forecast System Reanalysis) are compared with those in MERRA-2. Differences between alternate products from individual reanalysis systems are assessed; in particular, a comparison of CFSR data on model and pressure levels highlights the importance of vertical grid spacing. Most of the differences in distributions of UTLS jets and multiple tropopauses are consistent with the differences in assimilation model grids and resolution - for example, ERA-I (with the coarsest native horizontal resolution) typically shows a significant low bias in upper tropospheric jets with respect to MERRA-2, and JRA-55 a more modest one, while CFSR (with the finest native horizontal resolution) shows a high bias with respect to MERRA-2 in both upper tropospheric jets and multiple tropopauses. Vertical temperature structure and grid spacing are especially important for multiple tropopause characterizations. Substantial differences between MERRA and MERRA-2 are seen in mid- to high-latitude Southern Hemisphere (SH) winter upper tropospheric jets and multiple tropopauses as well as in the upper tropospheric jets associated with tropical circulations during the solstice seasons; some of the largest differences from the other reanalyses are seen at the same times and places.
Very good qualitative agreement among the reanalyses is seen between the large-scale climatological features in UTLS jet and multiple tropopause distributions. Quantitative differences may, however, have important consequences for transport and variability studies. Our results highlight the importance of considering reanalyses differences in UTLS studies, especially in relation to resolution and model grids; this is particularly critical when using high-resolution reanalyses as an observational reference for evaluating global chemistry-climate models.

  20. Posttraumatic headache: biopsychosocial comparisons with multiple control groups.

    PubMed

    Tatrow, Kristin; Blanchard, Edward B; Hickling, Edward J; Silverman, Daniel J

    2003-01-01

    This study examined somatic, psychological, and cognitive functioning of subjects with posttraumatic headache in comparison with multiple control groups. Posttraumatic headache is not as widely studied as other forms of headache (eg, tension-type, migraine). Previous research has suggested poor psychological functioning in patients with posttraumatic headache in comparison with other groups of patients with pain; however, this group has yet to be compared with a group of persons who have experienced trauma but are headache-free. Nineteen subjects with posttraumatic headache were studied, with full assessments available for 14 participants. Comparison groups, containing 16 participants each, included another headache group, a nonheadache group, and a trauma (motor vehicle accident) survivor nonheadache group. Participants completed several measures assessing somatic, psychological, and cognitive functioning. Findings revealed that the posttraumatic headache group exhibited significantly poorer functioning than the comparison groups on several measures including the Psychosomatic Symptom Checklist, Postconcussion Syndrome Checklist, axis II psychiatric diagnoses, Minnesota Multiphasic Personality Inventory, and the Daily Hassles Scale (frequency and total). Additionally, they scored higher on the following: number of axis I psychiatric diagnoses, the Daily Hassles Scale (intensity), Beck Depression Inventory, State-Trait Anxiety Inventory, and State-Trait Anger Expression Inventory. The posttraumatic headache group was similar to the other trauma group on the Posttraumatic Stress Disorder Symptom Checklist and the Life-Trauma Checklist. This study confirmed the distress seen in this understudied population of persons with headache and highlights areas of focus for proper assessment and treatment of those with headache and who have had an accident.

  1. Quantitative Serum Nuclear Magnetic Resonance Metabolomics in Large-Scale Epidemiology: A Primer on -Omic Technologies

    PubMed Central

    Kangas, Antti J; Soininen, Pasi; Lawlor, Debbie A; Davey Smith, George; Ala-Korpela, Mika

    2017-01-01

    Detailed metabolic profiling in large-scale epidemiologic studies has uncovered novel biomarkers for cardiometabolic diseases and clarified the molecular associations of established risk factors. A quantitative metabolomics platform based on nuclear magnetic resonance spectroscopy has found widespread use, already profiling over 400,000 blood samples. Over 200 metabolic measures are quantified per sample; in addition to many biomarkers routinely used in epidemiology, the method simultaneously provides fine-grained lipoprotein subclass profiling and quantification of circulating fatty acids, amino acids, gluconeogenesis-related metabolites, and many other molecules from multiple metabolic pathways. Here we focus on applications of magnetic resonance metabolomics for quantifying circulating biomarkers in large-scale epidemiology. We highlight the molecular characterization of risk factors, use of Mendelian randomization, and the key issues of study design and analyses of metabolic profiling for epidemiology. We also detail how integration of metabolic profiling data with genetics can enhance drug development. We discuss why quantitative metabolic profiling is becoming widespread in epidemiology and biobanking. Although large-scale applications of metabolic profiling are still novel, it seems likely that comprehensive biomarker data will contribute to etiologic understanding of various diseases and abilities to predict disease risks, with the potential to translate into multiple clinical settings. PMID:29106475

  2. Large-scale road safety programmes in low- and middle-income countries: an opportunity to generate evidence.

    PubMed

    Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David

    2013-01-01

    The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence generation on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces one such multi-country initiative, the Road Safety in 10 Countries Project (RS-10). By building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers and policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for the evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches on a real-world, large-scale road safety evaluation and to generate new knowledge for the field of road safety.

  3. A Global Evaluation of Coral Reef Management Performance: Are MPAs Producing Conservation and Socio-Economic Improvements?

    NASA Astrophysics Data System (ADS)

    Hargreaves-Allen, Venetia; Mourato, Susana; Milner-Gulland, Eleanor Jane

    2011-04-01

    There is a consensus that Marine Protected Area (MPA) performance needs regular evaluation against clear criteria, incorporating counterfactual comparisons of ecological and socio-economic performance. However, these evaluations are scarce at the global level. We compiled self-reports from managers and researchers of 78 coral reef-based MPAs world-wide on the conservation and welfare improvements that their MPAs provide. We developed a suite of performance measures including fulfilment of design and management criteria, achievement of aims, the cessation of banned or destructive activities, change in threats, and measurable ecological and socio-economic changes in outcomes, which we evaluated with respect to the MPA's age, geographical location and main aims. The sample was found to be broadly representative of MPAs generally, and suggests that many MPAs do not achieve certain fundamental aims, including improvements in coral cover over time (in 25% of MPAs) and conflict reduction (in 25%). However, the large majority demonstrated improvements in terms of slowing coral loss, reducing destructive uses and increasing tourism and local employment, despite many being small, underfunded and facing multiple large-scale threats beyond the control of managers. Yet spatial comparisons suggest that in some regions MPAs are simply mirroring changes outside their boundaries, which demonstrates the importance of testing for additionality. MPA benefits do not appear to increase linearly over time. In combination with other management efforts and regulations, especially those relating to large-scale threat reduction and targeted fisheries and conflict resolution instruments, MPAs are an important tool for achieving coral reef conservation globally. Given greater resources and changes that incorporate the best available science, such as larger MPAs and no-take areas, networks and reduced user pressure, it is likely that performance could be further enhanced.
Performance evaluation should test for the generation of additional ecological and socio-economic improvements over time and compared to unmanaged areas as part of an adaptive management regime.

  4. Multimodal Investigation of Network Level Effects Using Intrinsic Functional Connectivity, Anatomical Covariance, and Structure-to-Function Correlations in Unmedicated Major Depressive Disorder

    PubMed Central

    Scheinost, Dustin; Holmes, Sophie E; DellaGioia, Nicole; Schleifer, Charlie; Matuskey, David; Abdallah, Chadi G; Hampson, Michelle; Krystal, John H; Anticevic, Alan; Esterlis, Irina

    2018-01-01

    Converging evidence suggests that major depressive disorder (MDD) affects multiple large-scale brain networks. Analyses of the correlation or covariance of regional brain structure and function applied to structural and functional MRI data may provide insights into systems-level organization and structure-to-function correlations in the brain in MDD. This study applied tensor-based morphometry and intrinsic connectivity distribution to identify regions of altered volume and intrinsic functional connectivity in data from unmedicated individuals with MDD (n=17) and healthy comparison participants (HC, n=20). These regions were then used as seeds for exploratory anatomical covariance and connectivity analyses. Reduction in volume in the anterior cingulate cortex (ACC) and lower structural covariance between the ACC and the cerebellum were observed in the MDD group. Additionally, individuals with MDD had significantly lower whole-brain intrinsic functional connectivity in the medial prefrontal cortex (mPFC). This mPFC region showed altered connectivity to the ventral lateral PFC (vlPFC) and local circuitry in MDD. Global connectivity in the ACC was negatively correlated with reported depressive symptomatology. The mPFC–vlPFC connectivity was positively correlated with depressive symptoms. Finally, we observed increased structure-to-function correlation in the PFC/ACC in the MDD group. Although across all analysis methods and modalities alterations in the PFC/ACC were a common finding, each modality and method detected alterations in subregions belonging to distinct large-scale brain networks. These exploratory results support the hypothesis that MDD is a systems level disorder affecting multiple brain networks located in the PFC and provide new insights into the pathophysiology of this disorder. PMID:28944772

  5. Multimodal Investigation of Network Level Effects Using Intrinsic Functional Connectivity, Anatomical Covariance, and Structure-to-Function Correlations in Unmedicated Major Depressive Disorder.

    PubMed

    Scheinost, Dustin; Holmes, Sophie E; DellaGioia, Nicole; Schleifer, Charlie; Matuskey, David; Abdallah, Chadi G; Hampson, Michelle; Krystal, John H; Anticevic, Alan; Esterlis, Irina

    2018-04-01

    Converging evidence suggests that major depressive disorder (MDD) affects multiple large-scale brain networks. Analyses of the correlation or covariance of regional brain structure and function applied to structural and functional MRI data may provide insights into systems-level organization and structure-to-function correlations in the brain in MDD. This study applied tensor-based morphometry and intrinsic connectivity distribution to identify regions of altered volume and intrinsic functional connectivity in data from unmedicated individuals with MDD (n=17) and healthy comparison participants (HC, n=20). These regions were then used as seeds for exploratory anatomical covariance and connectivity analyses. Reduction in volume in the anterior cingulate cortex (ACC) and lower structural covariance between the ACC and the cerebellum were observed in the MDD group. Additionally, individuals with MDD had significantly lower whole-brain intrinsic functional connectivity in the medial prefrontal cortex (mPFC). This mPFC region showed altered connectivity to the ventral lateral PFC (vlPFC) and local circuitry in MDD. Global connectivity in the ACC was negatively correlated with reported depressive symptomatology. The mPFC-vlPFC connectivity was positively correlated with depressive symptoms. Finally, we observed increased structure-to-function correlation in the PFC/ACC in the MDD group. Although across all analysis methods and modalities alterations in the PFC/ACC were a common finding, each modality and method detected alterations in subregions belonging to distinct large-scale brain networks. These exploratory results support the hypothesis that MDD is a systems level disorder affecting multiple brain networks located in the PFC and provide new insights into the pathophysiology of this disorder.

  6. Bioinspired Wood Nanotechnology for Functional Materials.

    PubMed

    Berglund, Lars A; Burgert, Ingo

    2018-05-01

It is a challenging task to realize the vision of hierarchically structured nanomaterials for large-scale applications. Herein, the biomaterial wood as a large-scale biotemplate for functionalization at multiple scales is discussed, to provide an increased property range to this renewable and CO2-storing bioresource, which is available at low cost and in large quantities. The Progress Report reviews the emerging field of functional wood materials in view of the specific features of the structural template and novel nanotechnological approaches for the development of wood-polymer composites and wood-mineral hybrids for advanced property profiles and new functions. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Determining erosion relevant soil characteristics with a small-scale rainfall simulator

    NASA Astrophysics Data System (ADS)

    Schindewolf, M.; Schmidt, J.

    2009-04-01

The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models at the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparably low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing EROSION 3D database was created in the mid 1990s based on large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. Therefore a new campaign of experiments is essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used in order to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives of the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is its restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the effect of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream.
Thus it is possible to overcome the plot length limit of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator is extensively applied to plots of different soil types, crop types and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that the results can be adequately reproduced by the applied combination of small-scale rainfall simulator and sediment-loaded water influx.

  8. A multiple-scales model of the shock-cell structure of imperfectly expanded supersonic jets

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Jackson, J. A.; Seiner, J. M.

    1985-01-01

    The present investigation is concerned with the development of an analytical model of the quasi-periodic shock-cell structure of an imperfectly expanded supersonic jet. The investigation represents a part of a program to develop a mathematical theory of broadband shock-associated noise of supersonic jets. Tam and Tanna (1982) have suggested that this type of noise is generated by the weak interaction between the quasi-periodic shock cells and the downstream-propagating large turbulence structures in the mixing layer of the jet. In the model developed in this paper, the effect of turbulence in the mixing layer of the jet is simulated by the addition of turbulent eddy-viscosity terms to the momentum equation. Attention is given to the mean-flow profile and the numerical solution, and a comparison of the numerical results with experimental data.

  9. A COMPARISON OF INTERCELL METRICS ON DISCRETE GLOBAL GRID SYSTEMS

    EPA Science Inventory

    A discrete global grid system (DGGS) is a spatial data model that aids in global research by serving as a framework for environmental modeling, monitoring and sampling across the earth at multiple spatial scales. Topological and geometric criteria have been proposed to evaluate a...

  10. Validity and reliability of a pilot scale for assessment of multiple system atrophy symptoms.

    PubMed

    Matsushima, Masaaki; Yabe, Ichiro; Takahashi, Ikuko; Hirotani, Makoto; Kano, Takahiro; Horiuchi, Kazuhiro; Houzen, Hideki; Sasaki, Hidenao

    2017-01-01

Multiple system atrophy (MSA) is a rare progressive neurodegenerative disorder for which a brief yet sensitive scale is required for use in clinical trials and general screening. We previously compared several scales for the assessment of MSA symptoms and devised an eight-item pilot scale with a large standardized response mean (handwriting, finger taps, transfers, standing with feet together, turning trunk, turning 360°, gait, body sway). The aim of the present study was to investigate the validity and reliability of this simple pilot scale for the assessment of multiple system atrophy symptoms. Thirty-two patients with MSA (15 male/17 female; 20 cerebellar subtype [MSA-C]/12 parkinsonian subtype [MSA-P]) were prospectively registered between January 1, 2014 and February 28, 2015. Patients were evaluated by two independent raters using the Unified MSA Rating Scale (UMSARS), the Scale for the Assessment and Rating of Ataxia (SARA), and the pilot scale. Correlations between UMSARS, SARA, and pilot scale scores, intraclass correlation coefficients (ICCs), and Cronbach's alpha coefficients were calculated. Pilot scale scores correlated significantly with scores for UMSARS Parts I, II, and IV as well as with SARA scores. Intra-rater and inter-rater ICCs and Cronbach's alpha coefficients remained high (> 0.94) for all measures. The results of the present study indicate the validity and reliability of the eight-item pilot scale, particularly for the assessment of symptoms in patients with early-stage multiple system atrophy.
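The reliability statistics reported in this record (Cronbach's alpha, intraclass correlation) are standard computations. A minimal sketch of Cronbach's alpha, using entirely hypothetical item scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 patients scored on 8 items (0-4 each), with each
# patient's items loosely tracking a latent severity level.
rng = np.random.default_rng(0)
severity = rng.integers(0, 5, size=(6, 1))
scores = np.clip(severity + rng.integers(-1, 2, size=(6, 8)), 0, 4)
print(round(cronbach_alpha(scores), 3))
```

Alpha near 1 indicates the items measure a common construct; the study reports values above 0.94 for its eight-item scale.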

  11. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2016-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
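The guessing bias discussed here can be illustrated against the plain dichotomous Rasch model. A sketch with hypothetical ability and difficulty values, using a pseudo-guessing floor to show the effect (this is illustrative, not the article's estimation method):

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Dichotomous Rasch probability of a correct response
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def guessing_p(theta: float, b: float, c: float) -> float:
    """Same item with a pseudo-guessing floor c (3PL-style): the
    success probability never drops below c on a multiple-choice item."""
    return c + (1.0 - c) * rasch_p(theta, b)

# For a low-ability examinee, guessing raises the success probability well
# above the Rasch prediction, which biases difficulty estimates downward.
print(round(rasch_p(-2.0, 0.0), 3), round(guessing_p(-2.0, 0.0, 0.25), 3))
```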

  12. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

FP-Growth is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
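FP-Growth itself builds a compressed prefix tree, which is too involved to sketch here; the underlying task of frequent pattern mining, however, can be illustrated with a naive support-counting pass over toy transactions (illustrative only, not the authors' parallel fault-tolerant algorithm):

```python
from collections import Counter
from itertools import combinations

# Toy transactions; min_support is the fraction of transactions an itemset
# must appear in to be considered "frequent".
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
min_support = 0.6

# Count every itemset of size 1 and 2 in each transaction
counts = Counter()
for t in transactions:
    for k in (1, 2):
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1

threshold = min_support * len(transactions)
frequent = {s: n for s, n in counts.items() if n >= threshold}
print(frequent)
```

FP-Growth reaches the same answer without enumerating candidate itemsets, which is what makes it attractive at the dataset scales this record targets.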

  13. PageMan: an interactive ontology tool to generate, display, and annotate overview graphs for profiling experiments.

    PubMed

    Usadel, Björn; Nagel, Axel; Steinhauser, Dirk; Gibon, Yves; Bläsing, Oliver E; Redestig, Henning; Sreenivasulu, Nese; Krall, Leonard; Hannah, Matthew A; Poree, Fabien; Fernie, Alisdair R; Stitt, Mark

    2006-12-18

Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large-scale experiments investigating multiple time points or sets of mutants or transgenics have become more common. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments and allows genome-level responses to be compared across several microarray experiments, covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species.
In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as well as a tutorial, and is freely available at http://mapman.mpimp-golm.mpg.de/pageman/. PageMan allows multiple microarray experiments to be efficiently condensed into a single page graphical display. The flexible interface allows data to be quickly and easily visualized, facilitating comparisons within experiments and to published experiments, thus enabling researchers to gain a rapid overview of the biological responses in the experiments.
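The over-representation analysis that tools like PageMan perform is, in its simplest form, a one-sided hypergeometric test of category membership. A stdlib-only sketch with hypothetical gene counts (the category sizes below are invented for illustration):

```python
from math import comb

def hypergeom_sf(k: int, N: int, K: int, n: int) -> float:
    """P(X >= k) when drawing n genes without replacement from N genes,
    of which K belong to the category: the one-sided over-representation
    p-value."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical example: 40 of 200 differentially expressed genes fall in a
# functional category covering 500 of the 10000 genes on the array
# (expected by chance: 200 * 500 / 10000 = 10 genes).
p = hypergeom_sf(40, 10000, 500, 200)
print(p)
```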

  14. A practical overview and comparison of certain commercial forensic software tools for processing large-scale digital investigations

    NASA Astrophysics Data System (ADS)

    Kröger, Knut; Creutzburg, Reiner

    2013-05-01

    The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance Encase Forensic 7 regarding its performance, functionality, usability and capability. We will show how these software tools work with large forensic images and how capable they are in examining complex and big data scenarios.

  15. Comparison of carrier multiplication yields in PbS and PbSe nanocrystals: the role of competing energy-loss processes.

    PubMed

    Stewart, John T; Padilha, Lazaro A; Qazilbash, M Mumtaz; Pietryga, Jeffrey M; Midgett, Aaron G; Luther, Joseph M; Beard, Matthew C; Nozik, Arthur J; Klimov, Victor I

    2012-02-08

    Infrared band gap semiconductor nanocrystals are promising materials for exploring generation III photovoltaic concepts that rely on carrier multiplication or multiple exciton generation, the process in which a single high-energy photon generates more than one electron-hole pair. In this work, we present measurements of carrier multiplication yields and biexciton lifetimes for a large selection of PbS nanocrystals and compare these results to the well-studied PbSe nanocrystals. The similar bulk properties of PbS and PbSe make this an important comparison for discerning the pertinent properties that determine efficient carrier multiplication. We observe that PbS and PbSe have very similar biexciton lifetimes as a function of confinement energy. Together with the similar bulk properties, this suggests that the rates of multiexciton generation, which is the inverse of Auger recombination, are also similar. The carrier multiplication yields in PbS nanocrystals, however, are strikingly lower than those observed for PbSe nanocrystals. We suggest that this implies the rate of competing processes, such as phonon emission, is higher in PbS nanocrystals than in PbSe nanocrystals. Indeed, our estimations for phonon emission mediated by the polar Fröhlich-type interaction indicate that the corresponding energy-loss rate is approximately twice as large in PbS than in PbSe. © 2011 American Chemical Society

  16. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.

  17. Dynamical tuning for MPC using population games: A water supply network application.

    PubMed

    Barreiro-Gomez, Julian; Ocampo-Martinez, Carlos; Quijano, Nicanor

    2017-07-01

Model predictive control (MPC) is a suitable strategy for the control of large-scale systems that have multiple design requirements, e.g., multiple physical and operational constraints. Besides, an MPC controller is able to deal with multiple control objectives by considering them within the cost function, which requires determining a proper prioritization for each of the objectives. Furthermore, when the system has time-varying parameters and/or disturbances, the appropriate prioritization might vary over time as well. This situation leads to the need for a dynamical tuning methodology. This paper addresses the dynamical tuning issue by using evolutionary game theory. The advantages of the proposed method are highlighted and tested on a large-scale water supply network with periodic time-varying disturbances. Finally, results are analyzed with respect to a multi-objective MPC controller that uses static tuning. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
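The tuning idea, adapting objective prioritizations with population/evolutionary dynamics, can be sketched with a plain replicator-dynamics update. The weights, fitness values, and step size below are hypothetical and not the paper's formulation:

```python
import numpy as np

def replicator_step(w: np.ndarray, fitness: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """One Euler step of replicator dynamics: each weight grows in proportion
    to how much its objective's fitness exceeds the population average, so
    the weights stay a valid prioritization (nonnegative, summing to 1)."""
    avg = w @ fitness
    return w + dt * w * (fitness - avg)

w = np.array([0.5, 0.3, 0.2])          # initial objective prioritizations
fitness = np.array([2.0, 1.0, 0.5])    # e.g. normalized per-objective errors
for _ in range(50):
    w = replicator_step(w, fitness)
print(w.round(3))
```

With fixed fitness the weights concentrate on the objective with the largest fitness; in a dynamical-tuning loop the fitness values would be recomputed from the system state at every control step.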

  18. Student experiences across multiple flipped courses in a single curriculum.

    PubMed

    Khanova, Julia; Roth, Mary T; Rodgers, Jo Ellen; McLaughlin, Jacqueline E

    2015-10-01

    The flipped classroom approach has garnered significant attention in health professions education, which has resulted in calls for curriculum-wide implementations of the model. However, research to support the development of evidence-based guidelines for large-scale flipped classroom implementations is lacking. This study was designed to examine how students experience the flipped classroom model of learning in multiple courses within a single curriculum, as well as to identify specific elements of flipped learning that students perceive as beneficial or challenging. A qualitative analysis of students' comments (n = 6010) from mid-course and end-of-course evaluations of 10 flipped courses (in 2012-2014) was conducted. Common and recurring themes were identified through systematic iterative coding and sorting using the constant comparison method. Multiple coders, agreement through consensus and member checking were utilised to ensure the trustworthiness of findings. Several themes emerged from the analysis: (i) the perceived advantages of flipped learning coupled with concerns about implementation; (ii) the benefits of pre-class learning and factors that negatively affect these benefits, such as quality and quantity of learning materials, as well as overall increase in workload, especially in the context of multiple concurrent flipped courses; (iii) the role of the instructor in the flipped learning environment, particularly in engaging students in active learning and ensuring instructional alignment, and (iv) the need for assessments that emphasise the application of knowledge and critical thinking skills. Analysis of data from 10 flipped courses provided insight into common patterns of student learning experiences specific to the flipped learning model within a single curriculum. The study points to the challenges associated with scaling the implementation of the flipped classroom across multiple courses. 
Several core elements critical to the effective design and implementation of the flipped classroom model are identified. © 2015 John Wiley & Sons Ltd.

  19. Comparisons of survival predictions using survival risk ratios based on International Classification of Diseases, Ninth Revision and Abbreviated Injury Scale trauma diagnosis codes.

    PubMed

    Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd

    2005-09-01

We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis, and from the survival rate when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival using the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods showed better Hosmer-Lemeshow goodness-of-fit than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs as multiplication can. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
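The two prediction rules compared in this record are easy to state concretely. A sketch with invented SRR values (the diagnosis codes and ratios below are hypothetical, not from the registry):

```python
from math import prod

# Hypothetical survival risk ratios (SRRs), one per coded injury diagnosis:
# each is the survival rate observed among patients carrying that diagnosis.
srrs = {"805.2": 0.98, "860.0": 0.90, "864.0": 0.85}

def p_survival_multiplicative(codes):
    """ICISS-style estimate: multiply the SRRs of all of a patient's injuries."""
    return prod(srrs[c] for c in codes)

def p_survival_minimum(codes):
    """Worst-injury estimate: take the single lowest SRR."""
    return min(srrs[c] for c in codes)

patient = ["805.2", "860.0", "864.0"]
print(p_survival_multiplicative(patient), p_survival_minimum(patient))
```

Multiplying always yields the lower (more pessimistic) estimate for a multiply-injured patient, which is how inaccurate SRRs get magnified; the minimum rule avoids that compounding.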

  20. Exclusive measurements of mean pion multiplicities in 4He-nucleus reactions from 200 to 800 MeV/nucleon

    NASA Technical Reports Server (NTRS)

    L'Hote, D.; Alard, J. P.; Augerat, J.; Babinet, R.; Brochard, F.; Fodor, Z.; Fraysse, L.; Girard, J.; Gorodetzky, P.; Gosset, J.; hide

    1987-01-01

Mean multiplicities of pi+ and pi- in 4He collisions with C, Cu, and Pb at 200, 600, and 800 MeV/u, and with C and Pb at 400 MeV/u, have been measured using the large solid angle detector Diogene. The dependence of pion multiplicity on projectile incident energy, target mass and proton multiplicity is studied in comparison with intra-nuclear cascade predictions. The discrepancy between experimental results and theory is pointed out and discussed.

  1. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimation of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
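The kind of confidence interval analysis described here can be sketched by bootstrapping a community dissimilarity measure such as Bray-Curtis. The counts below are simulated, and resampling taxa is just one simple bootstrap scheme, not necessarily the one used in the study:

```python
import numpy as np

def bray_curtis(x: np.ndarray, y: np.ndarray) -> float:
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    return np.abs(x - y).sum() / (x + y).sum()

rng = np.random.default_rng(1)
# Hypothetical read counts for two communities across 30 taxa
a = rng.poisson(20, size=30)
b = rng.poisson(25, size=30)

# Bootstrap over taxa to attach a 95% confidence interval to the estimate
boot = []
for _ in range(2000):
    idx = rng.integers(0, 30, size=30)
    boot.append(bray_curtis(a[idx], b[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(bray_curtis(a, b), 3), (round(float(lo), 3), round(float(hi), 3)))
```

Whether such an interval shrinks with more samples or deeper sequencing is exactly the question the study addresses.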

  2. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing

    PubMed Central

    2017-01-01

High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimation of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889

  3. Measuring the impact of multiple sclerosis on psychosocial functioning: the development of a new self-efficacy scale.

    PubMed

    Airlie, J; Baker, G A; Smith, S J; Young, C A

    2001-06-01

To develop a scale to measure self-efficacy in neurologically impaired patients with multiple sclerosis and to assess the scale's psychometric properties. Cross-sectional questionnaire study in a clinical setting, with the retest questionnaire returned by mail after completion at home. Regional multiple sclerosis (MS) outpatient clinic or the Clinical Trials Unit (CTU) at a large neuroscience centre in the UK. One hundred persons with MS attending the Walton Centre for Neurology and Neurosurgery and Clatterbridge Hospital, Wirral, as outpatients. Cognitively impaired patients were excluded at an initial clinic assessment. Patients were asked to provide demographic data and complete the self-efficacy scale along with the following validated scales: Hospital Anxiety and Depression Scale, Rosenberg Self-Esteem Scale, Impact, Stigma and Mastery Scales, and Rankin Scale. The Rankin Scale and Barthel Index were also assessed by the physician. A new 11-item self-efficacy scale was constructed consisting of two domains, control and personal agency. The internal consistency of the scale was confirmed using Cronbach's alpha (alpha = 0.81). The test-retest reliability of the scale over two weeks was acceptable, with an intraclass correlation coefficient of 0.79. Construct validity was investigated using Pearson's product moment correlation coefficient, resulting in significant correlations with depression (r = -0.52), anxiety (r = -0.50) and mastery (r = 0.73). Multiple regression analysis demonstrated that these factors accounted for 70% of the variance of scores on the self-efficacy scale, with scores on mastery, anxiety and perceived disability being independently significant. Assessment of the psychometric properties of this new self-efficacy scale suggests that it possesses good validity and reliability in patients with multiple sclerosis.

  4. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  5. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    PubMed

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
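
Force matching fits a cheap model's parameters by minimizing the squared difference between its forces and higher-level reference forces. When the model is linear in its parameters, this reduces to ordinary least squares, as in this toy sketch (the pair-force form F(r) = a/r² - b and all numbers are illustrative assumptions, not the DFTB parameterization used in the paper):

```python
import numpy as np

def fit_pair_force(r, f_ref):
    """Least-squares force matching for the toy form F(r) = a/r**2 - b."""
    A = np.column_stack([1.0 / r**2, -np.ones_like(r)])
    theta, *_ = np.linalg.lstsq(A, f_ref, rcond=None)
    return theta  # fitted (a, b)

# Synthetic "reference" forces playing the role of the high-level training data.
r = np.linspace(1.0, 3.0, 50)
f_ref = 4.0 / r**2 - 0.5
a, b = fit_pair_force(r, f_ref)   # recovers a = 4.0, b = 0.5
```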

  8. Thermal/structural modeling of a large scale in situ overtest experiment for defense high level waste at the Waste Isolation Pilot Plant Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.; Stone, C.M.; Krieg, R.D.

    Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, The Overtest for Defense High Level Waste, are described; and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.

  9. Development and validation of Big Four personality scales for the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2).

    PubMed

    Calabrese, William R; Rudick, Monica M; Simms, Leonard J; Clark, Lee Anna

    2012-09-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)--such as the Big Three, Big Four, and Big Five trait models--have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality hierarchy. To unify these measurement models psychometrically, we sought to develop Big Five trait scales within the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2). Through structural and content analyses, we examined relations between the SNAP-2, the Big Five Inventory (BFI), and the NEO Five-Factor Inventory (NEO-FFI) ratings in a large data set (N = 8,690), including clinical, military, college, and community participants. Results yielded scales consistent with the Big Four model of personality (i.e., Neuroticism, Conscientiousness, Introversion, and Antagonism) and not the Big Five, as there were insufficient items related to Openness. Resulting scale scores demonstrated strong internal consistency and temporal stability. Structural validity and external validity were supported by strong convergent and discriminant validity patterns between Big Four scale scores and other personality trait scores and expectable patterns of self-peer agreement. Descriptive statistics and community-based norms are provided. The SNAP-2 Big Four Scales enable researchers and clinicians to assess personality at multiple levels of the trait hierarchy and facilitate comparisons among competing big-trait models. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  11. L2-norm multiple kernel learning and its application to biomedical data fusion

    PubMed Central

    2010-01-01

    Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms.
To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to that of the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363
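
The efficiency claim for LSSVM rests on its training step being a single linear solve rather than a quadratic program. A hedged numpy sketch: the fixed non-sparse weights on two RBF kernels stand in for the optimized L2 MKL coefficients, and the data are synthetic:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-sample arrays X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(K, y, reg=1.0):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/reg]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, bias b

# Non-sparse (L2-style) combination of two kernels with fixed weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
K = 0.6 * rbf_kernel(X, X, 0.5) + 0.4 * rbf_kernel(X, X, 2.0)
alpha, b = lssvm_fit(K, y, reg=10.0)
pred = K @ alpha + b   # in-sample predictions
```

Solving one (n+1) x (n+1) linear system replaces the QP of a standard SVM, which is what keeps the per-iteration cost of LSSVM MKL low.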

  12. Modal Testing of the NPSAT1 Engineering Development Unit

    DTIC Science & Technology

    2012-07-01

    I declare that this Master's thesis was produced by me independently, using only the cited sources and aids...logarithmic scale. As Figure 2 shows, natural frequencies are indicated by large values of the first CMIF (peaks), and multiple modes can be detected by...structure's behavior. Ewins even states, “that no large-scale modal test should be permitted to proceed until some preliminary SDOF analyses have

  13. A Comparison of Large Lecture, Fully Online, and Hybrid Sections of Introduction to Special Education

    ERIC Educational Resources Information Center

    O'brien, Chris; Hartshorne, Richard; Beattie, John; Jordan, Luann

    2011-01-01

    This study evaluated the effectiveness of flexible learning options at a university serving multiple geographic areas (including remote and rural areas) and age groups by teaching an introduction to special education course to three large groups of pre-teacher education majors using three modes of instruction. The university offered sections as…

  14. Groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1989-01-01

    By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
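
The Huchra & Geller (1982) group finder is essentially friends-of-friends clustering: galaxies closer than a linking length are linked, and groups are the connected components of the resulting graph. A simplified sketch using a single fixed linking length and union-find (the actual algorithm scales separate projected and line-of-sight links with distance):

```python
import numpy as np

def friends_of_friends(positions, linking_length):
    """Group points whose pairwise distance is below a fixed linking length."""
    n = len(positions)
    parent = list(range(n))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair of "friends"; groups emerge as connected components.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < linking_length:
                parent[find(i)] = find(j)

    groups = {}
    for idx in range(n):
        groups.setdefault(find(idx), []).append(idx)
    return [sorted(g) for g in groups.values()]

# Two tight clumps of toy "galaxies", well separated from each other.
pts = np.array([[0.0, 0.0], [0.4, 0.0], [0.8, 0.0], [5.0, 5.0], [5.3, 5.0]])
groups = friends_of_friends(pts, linking_length=0.5)
```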

  15. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems.
A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company (Rolls-Royce Aerospace) and is based on the existing Allison AE3007 engine, designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
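
A response surface of the kind used here is typically a second-order polynomial fitted by least squares to responses sampled at designed points. A self-contained sketch on a 3x3 factorial design with a made-up response function (none of the engine models from the dissertation are represented):

```python
import numpy as np

def quad_design_matrix(X):
    """Second-order model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def fit_response_surface(X, y):
    """Ordinary least-squares fit of the quadratic coefficients."""
    beta, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)
    return beta

# 3x3 factorial design on the coded factor space [-1, 1]^2.
grid = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
true = lambda x1, x2: 2.0 + x1 - 0.5 * x2 + 0.3 * x1**2 + 0.2 * x1 * x2
y = true(grid[:, 0], grid[:, 1])      # responses at the design points
beta = fit_response_surface(grid, y)  # surrogate for the expensive analysis
```

Once fitted, the surrogate predicts responses anywhere in the factor ranges at negligible cost, which is what enables the concurrent subproblem exploration described above.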

  16. FIELD TEST OF CYCLODEXTRIN FOR ENHANCED IN-SITU FLUSHING OF MULTIPLE-COMPONENT IMMISCIBLE ORGANIC LIQUID CONTAMINATION: COMPARISON TO WATER FLUSHING

    EPA Science Inventory

    A pilot-scale field experiment was conducted to compare the remediation effectiveness of an enhanced-solubilization technique to that of water flushing for removal of multicomponent nonaqueous-phase organic liquid (NAPL) contaminants from a phreatic aquifer. This innovative remed...

  17. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  19. The SCALE-UP Project

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students not only to acquire content but also to practice important 21st-century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicate highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  20. A comparison of large-scale climate signals and the North American Multi-Model Ensemble (NMME) for drought prediction in China

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Nengcheng; Zhang, Xiang

    2018-02-01

    Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparation. In this study, we developed a statistical model, two weighted dynamic models, and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals using support vector regression (SVR); the dynamic components are the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models; and the hybrid component combines the statistical and dynamic components, with weights assigned based on their historical performance. The results indicate that the statistical and hybrid models show better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles, such as multiple statistical approaches, multiple dynamic models, or multiple hybrid models, are highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warning in China.
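
The hybrid combination step can be sketched as performance-based weighting of the component forecasts over a hindcast period. The inverse-MSE weighting rule below is an illustrative assumption, not necessarily the exact scheme of the study:

```python
import numpy as np

def hybrid_forecast(train_preds, train_obs, new_preds):
    """Weight component forecasts by inverse mean-squared hindcast error.

    train_preds: (n_models, n_times) hindcasts; train_obs: (n_times,) observations;
    new_preds: (n_models,) forecasts for the target period.
    """
    mse = ((train_preds - train_obs) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()   # inverse-MSE weights, summing to 1
    return w @ new_preds, w

# Toy hindcast: the "statistical" component tracks observations closely,
# the "dynamic" component does not, so it receives a much smaller weight.
obs = np.array([1.0, 2.0, 3.0, 4.0])
stat = np.array([1.1, 2.1, 2.9, 4.2])   # small errors -> large weight
dyn = np.array([2.0, 1.0, 4.0, 3.0])    # large errors -> small weight
fc, w = hybrid_forecast(np.vstack([stat, dyn]), obs, np.array([5.0, 7.0]))
```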

  1. Use of Second Generation Coated Conductors for Efficient Shielding of dc Magnetic Fields (Postprint)

    DTIC Science & Technology

    2010-07-15

    layer of superconducting film, can attenuate an external magnetic field of up to 5 mT by more than an order of magnitude. For comparison purposes...appears to be especially promising for the realization of large scale high-Tc superconducting screens. 15. SUBJECT TERMS magnetic screens, current...realization of large scale high-Tc superconducting screens. © 2010 American Institute of Physics. doi:10.1063/1.3459895 I. INTRODUCTION Magnetic screening

  2. Alternative projections of the impacts of private investment on southern forests: a comparison of two large-scale forest sector models of the United States.

    Treesearch

    Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert Moulton

    2001-01-01

    The TAMM/NAPAP/ATLAS/AREACHANGE(TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...

  3. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  4. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle.

    PubMed

    Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid-to-high taxonomic resolution may be necessary to determine zoning effects on key taxa. 
Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful for detecting changes in marine benthic communities across multiple spatial and temporal scales.

  6. Scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted cylindrical element

    NASA Astrophysics Data System (ADS)

    Tang, Zhanqi; Jiang, Nan

    2018-05-01

    This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were performed at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. We found that the large-scale fluctuations exert a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions near the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was noted for small-scale frequency modulation; however, the dissipative-scale frequency modulation seems to be independent of the cylindrical perturbation. The phase-relationship observation indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The integral-time-scale dependence of the phase relationship between the small/dissipative scales and the large scales is also discussed. Furthermore, the discrepancy between small- and dissipative-scale time shifts relative to the large-scale motions was examined, indicating that the small-scale amplitude/frequency leads the dissipative scales.
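    The amplitude-modulation diagnostic summarized above is commonly computed as the correlation between the large-scale signal and the low-pass-filtered Hilbert envelope of the small-scale residual. A minimal sketch of that standard procedure follows; it is not the authors' exact implementation, and the filter order and cutoff frequency are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulation_coeff(u, fs, f_cut):
    """Correlation between the large-scale component of u and the
    low-pass-filtered envelope of its small-scale residual."""
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")
    u = u - u.mean()
    u_large = filtfilt(b, a, u)          # large-scale component (cutoff filter)
    u_small = u - u_large                # small-scale residual
    envelope = np.abs(hilbert(u_small))  # instantaneous amplitude of small scales
    env_large = filtfilt(b, a, envelope) # keep only the large-scale part of it
    return np.corrcoef(u_large, env_large)[0, 1]
```

A positive coefficient indicates that small-scale activity is amplified when the large-scale fluctuation is positive, which is the sense of "stronger amplitude modulation" used in the abstract.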

  7. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms

    PubMed Central

    Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly

    2011-01-01

    The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N≥2 matrices D_1, …, D_N, each with full column rank. Each matrix is exactly factored as D_i = U_iΣ_iV^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients A_iA_j^{-1} of the matrices A_i = D_i^TD_i, i≠j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λk≥1. Equality holds if and only if the corresponding eigenvector vk is a right basis vector of equal significance in all matrices Di and Dj, that is σi,k/σj,k = 1 for all i and j, and the corresponding left basis vector ui,k is orthogonal to all other vectors in Ui for all i. The eigenvalues λk = 1, therefore, define the “common HO GSVD subspace.” We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
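    The construction described above can be sketched directly in NumPy. This is a naive illustration of the definitions (explicit inverses, no attention to conditioning), assuming full-column-rank inputs so the quotients A_iA_j^{-1} are well defined; it is not the authors' numerical algorithm:

```python
import numpy as np

def ho_gsvd(mats):
    """Sketch of the higher-order GSVD: returns (U_list, Sigma_list, V)
    with each D_i = U_i @ Sigma_i @ V.T, V shared by all factorizations."""
    A = [D.T @ D for D in mats]
    n, p = len(A), A[0].shape[0]
    S = np.zeros((p, p))
    for i in range(n):
        for j in range(i + 1, n):       # arithmetic mean of pairwise quotients
            S += A[i] @ np.linalg.inv(A[j]) + A[j] @ np.linalg.inv(A[i])
    S /= n * (n - 1)
    lam, V = np.linalg.eig(S)           # S is nondefective with real V, Lambda
    V = np.real(V)
    B = np.linalg.inv(V).T              # columns of D_i @ B are U_i Sigma_i
    Us, Sigmas = [], []
    for D in mats:
        UiSi = D @ B
        sig = np.linalg.norm(UiSi, axis=0)
        Us.append(UiSi / sig)           # unit-norm left basis vectors
        Sigmas.append(np.diag(sig))
    return Us, Sigmas, V
```

By construction U_iΣ_i = D_iV^{-T}, so the factorization reconstructs each D_i exactly; eigenvalues λ_k near 1 flag the common subspace.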

  8. Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations

    NASA Astrophysics Data System (ADS)

    Choi, Suk-Jin; Lee, Dong-Kyou

    2016-06-01

    This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between large-scale flow in the model domain and large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. These results reveal that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and forced large-scale fields should be considered, and spectral nudging is a desirable approach for doing so.
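    The essence of spectral nudging is to relax only the low-wavenumber (large-scale) part of a model field toward the driving field while leaving smaller scales free. A 1-D periodic toy version illustrates the idea; operational implementations nudge selected 2-D wavenumbers, vertical levels and variables, and the cutoff and relaxation strength here are illustrative assumptions:

```python
import numpy as np

def spectral_nudge(model, driving, k_cut, alpha):
    """Relax the large-scale (wavenumber <= k_cut) part of a periodic 1-D
    model field toward the driving field; small scales are untouched."""
    fm = np.fft.rfft(model)
    fd = np.fft.rfft(driving)
    k = np.arange(fm.size)
    mask = k <= k_cut                        # large scales only
    fm[mask] += alpha * (fd[mask] - fm[mask])
    return np.fft.irfft(fm, n=model.size)
```

With relaxation strength alpha = 1 the nudged field's large-scale spectral coefficients match the driving field exactly, while its small-scale coefficients are those of the free-running model.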

  9. Relative importance of climate changes at different time scales on net primary productivity-a case study of the Karst area of northwest Guangxi, China.

    PubMed

    Liu, Huiyu; Zhang, Mingyang; Lin, Zhenshan

    2017-10-05

    Climate changes are considered to significantly impact net primary productivity (NPP). However, there are few studies on how climate changes at multiple time scales impact NPP. With the MODIS NPP product and station-based observations of sunshine duration, annual average temperature and annual precipitation, the impacts of climate changes at different time scales on annual NPP have been studied with the EEMD (ensemble empirical mode decomposition) method in the Karst area of northwest Guangxi, China, during 2000-2013. Moreover, with a partial least squares regression (PLSR) model, the relative importance of climatic variables for annual NPP has been explored. The results show that (1) only at the quasi 3-year time scale do sunshine duration and temperature have significantly positive relations with NPP; (2) annual precipitation has no significant relation to NPP in direct comparison, but a significantly positive relation at the quasi 5-year time scale, because the 5-year time scale is not the dominant scale of precipitation; (3) changes in NPP may be dominated by inter-annual variability; and (4) multiple-time-scale analysis greatly improves the performance of the PLSR model for estimating NPP. The variable importance in projection (VIP) scores of sunshine duration and temperature at the quasi 3-year time scale, and of precipitation at the quasi 5-year time scale, are greater than 0.8, indicating that they were important for NPP during 2000-2013. However, sunshine duration and temperature at the quasi 3-year time scale are much more important. Our results underscore the importance of multiple-time-scale analysis for revealing the relations of NPP to a changing climate.
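    The VIP scores used above come from the PLSR weights and the y-variance explained by each latent component. The abstract does not give its implementation; the following is a generic NIPALS PLS1 with the standard VIP formula (X and y assumed mean-centered), offered as a hedged sketch:

```python
import numpy as np

def pls1_vip(X, y, n_comp):
    """NIPALS PLS1 plus VIP scores for each predictor column of X."""
    Xa = X.astype(float).copy()
    ya = y.astype(float).copy()
    p_vars = X.shape[1]
    W, ss = [], []
    for _ in range(n_comp):
        w = Xa.T @ ya
        w = w / np.linalg.norm(w)        # unit weight vector
        t = Xa @ w                       # scores
        tt = float(t @ t)
        p = Xa.T @ t / tt                # x-loadings
        q = float(ya @ t) / tt           # y-loading
        Xa = Xa - np.outer(t, p)         # deflate
        ya = ya - q * t
        W.append(w)
        ss.append(q * q * tt)            # y-variance explained by this component
    W, ss = np.array(W).T, np.array(ss)
    return np.sqrt(p_vars * (W**2 @ ss) / ss.sum())
```

Because the weight vectors are unit-norm, the mean of the squared VIP scores is exactly 1, which motivates the conventional "VIP > 0.8 is important" cutoff cited in the abstract.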

  10. Evaluation of Moderate-Resolution Imaging Spectroradiometer (MODIS) Snow Albedo Product (MCD43A) over Tundra

    NASA Technical Reports Server (NTRS)

    Wang, Zhuosen; Schaaf, Crystal B.; Chopping, Mark J.; Strahler, Alan H.; Wang, Jindi; Roman, Miguel O.; Rocha, Adrian V.; Woodcock, Curtis E.; Shuai, Yanmin

    2012-01-01

    This study assesses the MODIS standard Bidirectional Reflectance Distribution Function (BRDF)/Albedo product and the daily Direct Broadcast BRDF/Albedo algorithm at tundra locations under large solar zenith angles, highly anisotropic diffuse illumination, and multiple scattering conditions. These products generally agree with ground-based albedo measurements during the snow cover period when the Solar Zenith Angle (SZA) is less than 70 deg. An integrated validation strategy, including analysis of the representativeness of the surface heterogeneity, was performed to decide whether direct comparisons between field measurements and 500-m satellite products were appropriate or if the scaling of finer spatial resolution airborne or spaceborne data was necessary. Results indicate that the Root Mean Square Errors (RMSEs) are less than 0.047 during the snow-covered periods for all MCD43 albedo products at several Alaskan tundra areas. The MCD43 daily albedo product is particularly well suited to capture the rapidly changing surface conditions during the spring snow melt. Results also show that a full expression of the blue-sky albedo is necessary at these large-SZA snow-covered areas because of the effects of anisotropic diffuse illumination and multiple scattering. In tundra locations with dark residue as a result of fire, the MODIS albedo values are lower than those at the unburned site from the start of snowmelt.
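    The "full expression of the blue-sky albedo" refers to the standard blend of the black-sky (direct-beam) and white-sky (isotropic diffuse) albedos by the fraction of diffuse skylight. A minimal sketch of that blend; in practice the diffuse fraction is a function of solar zenith angle and aerosol optical depth obtained from a lookup table, which is elided here:

```python
def blue_sky_albedo(black_sky, white_sky, diffuse_fraction):
    """Blend black-sky and white-sky albedo by the diffuse skylight
    fraction (the standard blue-sky approximation)."""
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky
```

At large SZA over snow the diffuse fraction is substantial, so the white-sky term dominates, which is why the full expression matters for these comparisons.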

  11. Integral criteria for large-scale multiple fingerprint solutions

    NASA Astrophysics Data System (ADS)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of real multiple fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
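    The criterion above (minimize FRR at a predefined FAR) can be evaluated empirically for any fusion rule. The sketch below uses simple sum-rule fusion as a stand-in for the paper's optimal integral score, and an empirical threshold chosen from the impostor score distribution; all names and the Gaussian test data are illustrative assumptions:

```python
import numpy as np

def frr_at_far(genuine, impostor, far_target):
    """Empirical FRR at a predefined FAR: set the accept threshold at the
    (1 - FAR) quantile of the impostor scores, then count genuine scores
    that fall at or below it (i.e. false rejections)."""
    thr = np.quantile(impostor, 1.0 - far_target)
    return float(np.mean(genuine <= thr))

def sum_fusion(scores):
    """Sum-rule fusion of per-matcher similarity scores, one row per
    attempt and one column per matcher."""
    return np.sum(scores, axis=1)
```

For reasonably separated score distributions, fusing several fingerprint matchers lowers the FRR at the same FAR relative to any single matcher, which is the behavior the abstract reports for the integral score.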

  12. SANDO syndrome in a cohort of 107 patients with CPEO and mitochondrial DNA deletions.

    PubMed

    Hanisch, Frank; Kornhuber, Malte; Alston, Charlotte L; Taylor, Robert W; Deschauer, Marcus; Zierz, Stephan

    2015-06-01

    The sensory ataxic neuropathy with dysarthria and ophthalmoparesis (SANDO) syndrome is a subgroup of mitochondrial chronic progressive external ophthalmoplegia (CPEO)-plus disorders associated with multiple mitochondrial DNA (mtDNA) deletions. There is no systematic survey on SANDO in patients with CPEO with either single or multiple large-scale mtDNA deletions. In this retrospective analysis, we characterised the frequency and the genetic and clinical phenotypes of 107 index patients with mitochondrial CPEO (n=66 patients with single and n=41 patients with multiple mtDNA deletions) and assessed these for clinical evidence of a SANDO phenotype. Patients with multiple mtDNA deletions were additionally screened for mutations in the nuclear-encoded POLG, SLC25A4, PEO1 and RRM2B genes. The clinical, histological and genetic data of 11 patients with SANDO were further analysed. None of the 66 patients with single, large-scale mtDNA deletions fulfilled the clinical criteria of SANDO syndrome. In contrast, 9 of 41 patients (22%) with multiple mtDNA deletions and two additional family members fulfilled the clinical criteria for SANDO. Within this subgroup, multiple mtDNA deletions were associated with the following nuclear mutations: POLG (n=6), PEO1 (n=2), unidentified (n=2). The combination of sensory ataxic neuropathy with ophthalmoparesis (SANO) was observed in 70% of patients with multiple mtDNA deletions but only in 4% with single deletions. The combination of CPEO and sensory ataxic neuropathy (SANO, incomplete SANDO) was found in 43% of patients with multiple mtDNA deletions but not in patients with single deletions. The SANDO syndrome seems to indicate a cluster of symptoms within the wide range of multisystemic symptoms associated with mitochondrial CPEO. SANO seems to be the most frequent phenotype associated with multiple mtDNA deletions in our cohort, but is not, or is only rarely, associated with single, large-scale mtDNA deletions.

  13. Managing aquatic ecosystems and water resources under multiple stress--an introduction to the MARS project.

    PubMed

    Hering, Daniel; Carvalho, Laurence; Argillier, Christine; Beklioglu, Meryem; Borja, Angel; Cardoso, Ana Cristina; Duel, Harm; Ferreira, Teresa; Globevnik, Lidija; Hanganu, Jenica; Hellsten, Seppo; Jeppesen, Erik; Kodeš, Vit; Solheim, Anne Lyche; Nõges, Tiina; Ormerod, Steve; Panagopoulos, Yiannis; Schmutz, Stefan; Venohr, Markus; Birk, Sebastian

    2015-01-15

    Water resources globally are affected by a complex mixture of stressors resulting from a range of drivers, including urban and agricultural land use, hydropower generation and climate change. Understanding how stressors interact and impact upon ecological status and ecosystem services is essential for developing effective River Basin Management Plans and shaping future environmental policy. This paper details the nature of these problems for Europe's water resources and the need to find solutions at a range of spatial scales. In terms of the latter, we describe the aims and approaches of the EU-funded project MARS (Managing Aquatic ecosystems and water Resources under multiple Stress) and the conceptual and analytical framework that it is adopting to provide the knowledge, understanding and tools needed to address multiple stressors. MARS is operating at three scales: At the water body scale, the mechanistic understanding of stressor interactions and their impact upon water resources, ecological status and ecosystem services will be examined through multi-factorial experiments and the analysis of long time-series. At the river basin scale, modelling and empirical approaches will be adopted to characterise relationships between multiple stressors and ecological responses, functions, services and water resources. The effects of future land use and mitigation scenarios in 16 European river basins will be assessed. At the European scale, large-scale spatial analysis will be carried out to identify the relationships amongst stress intensity, ecological status and service provision, with a special focus on large transboundary rivers, lakes and fish. 
The project will support managers and policy makers in the practical implementation of the Water Framework Directive (WFD), of related legislation and of the Blueprint to Safeguard Europe's Water Resources by advising the 3rd River Basin Management Planning cycle, the revision of the WFD and by developing new tools for diagnosing and predicting multiple stressors.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and of the two methods themselves across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of different sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur as multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
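    The WTG coupling referred to above diagnoses a large-scale vertical velocity that relaxes the simulated potential-temperature profile toward the reference state over a fixed time scale, w = (θ − θ_ref) / (τ ∂θ/∂z). A minimal sketch of that closure (free-tropospheric levels only; the numbers in the usage example are illustrative):

```python
def wtg_vertical_velocity(theta, theta_ref, dtheta_dz, tau):
    """Diagnose the large-scale vertical velocity (m/s) that relaxes the
    simulated potential temperature theta (K) toward the reference profile
    theta_ref over relaxation time tau (s), given static stability
    dtheta_dz (K/m). Positive w = ascent, which cools the column."""
    return (theta - theta_ref) / (tau * dtheta_dz)
```

For example, a column 1 K warmer than the reference with stability 5 K/km and a 3-hour relaxation time yields w = 1 / (10800 × 0.005) ≈ 0.019 m/s of compensating ascent.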

  15. Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors

    USGS Publications Warehouse

    Knapp, Roland A.; Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Vrendenburg, Vance T.; Rosenblum, Erica Bree; Briggs, Cheryl J.

    2016-01-01

    Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth’s amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species’ adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale.

  16. Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors.

    PubMed

    Knapp, Roland A; Fellers, Gary M; Kleeman, Patrick M; Miller, David A W; Vredenburg, Vance T; Rosenblum, Erica Bree; Briggs, Cheryl J

    2016-10-18

    Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth's amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species' adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale.

  17. Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors

    PubMed Central

    Knapp, Roland A.; Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Rosenblum, Erica Bree; Briggs, Cheryl J.

    2016-01-01

    Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth’s amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species’ adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale. PMID:27698128

  18. The large-scale modulation of cosmic rays in mid-1982: Its dependence on heliospheric longitude and radius

    NASA Technical Reports Server (NTRS)

    Pyle, K. R.; Simpson, J. A.

    1985-01-01

    Near solar maximum, a series of large radial solar wind shocks in June and July 1982 provided a unique opportunity to study the solar modulation of galactic cosmic rays with an array of spacecraft widely separated both in heliocentric radius and longitude. By eliminating hysteresis effects it is possible to begin to separate radial and azimuthal effects in the outer heliosphere. On the large scale, changes in modulation (both the increasing and recovery phases) propagate outward at close to the solar wind velocity, except for the near-term effects of solar wind shocks, which may propagate at a significantly higher velocity. In the outer heliosphere, azimuthal effects are small in comparison with radial effects for large-scale modulation at solar maximum.

  19. Metabolic Imaging in Multiple Time Scales

    PubMed Central

    Ramanujan, V Krishnan

    2013-01-01

    We report here a novel combination of time-resolved imaging methods for probing mitochondrial metabolism at multiple time scales at the level of single cells. By exploiting a mitochondrial membrane potential reporter fluorescence, we demonstrate single-cell metabolic dynamics at time scales ranging from milliseconds to seconds to minutes in response to glucose metabolism and mitochondrial perturbations in real time. Our results show that, in comparison with normal human mammary epithelial cells, breast cancer cells display significant alterations in metabolic responses at all measured time scales, as seen by single-cell kinetics, fluorescence recovery after photobleaching, and scaling analysis of time-series data obtained from mitochondrial fluorescence fluctuations. Furthermore, scaling analysis of time-series data in living cells with distinct mitochondrial dysfunction also revealed significant metabolic differences, thereby suggesting the broader applicability (e.g. in mitochondrial myopathies and other metabolic disorders) of the proposed strategies beyond the scope of cancer metabolism. We discuss the scope of these findings in the context of developing portable, real-time metabolic measurement systems that can find applications in preclinical and clinical diagnostics. PMID:24013043

  20. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.

    2008-12-01

    NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. 
A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.

  1. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.

    2009-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. 
A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, the AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 K; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEaSUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
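The space/time "matchup" step described above can be sketched generically. The function below is a hypothetical illustration (it is not SciFlo's actual API): each observation in one dataset is paired with its nearest spatial neighbor in another, and the pair is kept only if it also falls within a time window.

```python
import numpy as np
from scipy.spatial import cKDTree

def matchup(coords_a, times_a, coords_b, times_b, max_dist, max_dt):
    """For each observation in dataset A, return the index of the nearest
    observation in dataset B that lies within max_dist (coordinate units)
    and max_dt (time units), or -1 when no such neighbor exists."""
    coords_a, coords_b = np.asarray(coords_a, float), np.asarray(coords_b, float)
    times_a, times_b = np.asarray(times_a, float), np.asarray(times_b, float)
    dist, idx = cKDTree(coords_b).query(coords_a, k=1)  # nearest spatial neighbor
    ok = (dist <= max_dist) & (np.abs(times_b[idx] - times_a) <= max_dt)
    return np.where(ok, idx, -1)
```

A real swath-to-grid matchup would use great-circle distances and quality flags; the flat-coordinate KD-tree here only illustrates the pairing logic.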

  2. Intensity-Duration-Frequency curves from remote sensing datasets: direct comparison of weather radar and CMORPH over the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Morin, Efrat; Marra, Francesco; Peleg, Nadav; Mei, Yiwen; Anagnostou, Emmanouil N.

    2017-04-01

    Rainfall frequency analysis is used to quantify the probability of occurrence of extreme rainfall and is traditionally based on rain gauge records. The limited spatial coverage of rain gauges is insufficient to sample the spatiotemporal variability of extreme rainfall and to provide the areal information required by management and design applications. Conversely, remote sensing instruments, even if quantitatively uncertain, offer coverage and spatiotemporal detail that allow overcoming these issues. In recent years, remote sensing datasets began to be used for frequency analyses, taking advantage of increased record lengths and quantitative adjustments of the data. However, the studies so far have made use of concepts and techniques developed for rain gauge (i.e. point or multiple-point) data and have been validated by comparison with gauge-derived analyses. These procedures add further sources of uncertainty, prevent separating data uncertainties from methodological ones, and prevent fully exploiting the available information. In this study, we step out of the gauge-centered concept, presenting a direct comparison between at-site Intensity-Duration-Frequency (IDF) curves derived from different remote sensing datasets on corresponding spatial scales, temporal resolutions and records. We analyzed 16 years of homogeneously corrected and gauge-adjusted C-Band weather radar estimates, high-resolution CMORPH and gauge-adjusted high-resolution CMORPH over the Eastern Mediterranean. Results of this study include: (a) good spatial correlation between radar and satellite IDFs (≈0.7 for 2-5 year return periods); (b) consistent correlation and dispersion in the raw and gauge-adjusted CMORPH; (c) bias that is almost uniform with return period for 12-24 h durations; (d) radar identifies thicker-tailed distributions than CMORPH, and the tail of the distributions depends on the spatial and temporal scales.
These results demonstrate the potential of remote sensing datasets for rainfall frequency analysis for management (e.g. warning and early-warning systems) and design (e.g. sewer design, large-scale drainage planning).
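The at-site frequency-analysis step underlying such IDF curves can be sketched as follows, assuming annual-maximum intensities have already been extracted for a pixel and duration. The Gumbel (EV1) distribution fitted by the method of moments is a common textbook choice and is used here only as an illustration, not necessarily the authors' estimator.

```python
import numpy as np

def gumbel_idf(annual_max_intensity, return_periods):
    """Fit a Gumbel (EV1) distribution to annual-maximum intensities by the
    method of moments and return the quantile for each return period T:
        i_T = mu - beta * ln(-ln(1 - 1/T))
    """
    x = np.asarray(annual_max_intensity, float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi  # scale parameter
    mu = x.mean() - 0.5772 * beta                # location (Euler-Mascheroni const.)
    T = np.asarray(return_periods, float)
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))
```

Repeating this fit per pixel and per duration, and reading off the quantiles, yields the gridded IDF curves that the study compares across radar and CMORPH.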

  3. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with the sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  4. Evaluating large-scale health programmes at a district level in resource-limited countries.

    PubMed

    Svoronos, Theodore; Mate, Kedar S

    2011-11-01

    Recent experience in evaluating large-scale global health programmes has highlighted the need to consider contextual differences between sites implementing the same intervention. Traditional randomized controlled trials are ill-suited for this purpose, as they are designed to identify whether an intervention works, not how, when and why it works. In this paper we review several evaluation designs that attempt to account for contextual factors that contribute to intervention effectiveness. Using these designs as a base, we propose a set of principles that may help to capture information on context. Finally, we propose a tool, called a driver diagram, traditionally used in implementation work, that would allow evaluators to systematically monitor changing dynamics in project implementation and identify contextual variation across sites. We describe an implementation-related example from South Africa to underline the strengths of the tool. If used across multiple sites and multiple projects, the resulting driver diagrams could be pooled together to form a generalized theory of how, when and why a widely used intervention works. Mechanisms similar to the driver diagram are urgently needed to complement existing evaluations of large-scale implementation efforts.

  5. Phylo-VISTA: Interactive visualization of multiple DNA sequence alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.

    The power of multi-sequence comparison for biological discovery is well established. The need for new capabilities to visualize and compare cross-species alignment data is intensified by the growing number of genomic sequence datasets being generated for an ever-increasing number of organisms. To be efficient, these visualization algorithms must support the ability to accommodate consistently a wide range of evolutionary distances in a comparison framework based upon phylogenetic relationships. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing a similarity measure for multiple DNA sequences. The complexity of visual presentation is effectively organized using a framework based upon interspecies phylogenetic relationships. The phylogenetic organization supports rapid, user-guided interspecies comparison. To aid in navigation through large sequence datasets, Phylo-VISTA leverages concepts from VISTA that provide a user with the ability to select and view data at varying resolutions. The combination of multiresolution data visualization and analysis, combined with the phylogenetic framework for interspecies comparison, produces a highly flexible and powerful tool for visual data analysis of multiple sequence alignments. Availability: Phylo-VISTA is available at http://www-gsd.lbl.gov/phylovista. It requires an Internet browser with Java Plugin 1.4.2 and is integrated into the global alignment program LAGAN at http://lagan.stanford.edu

  6. Robot-assisted vs. sensory integration training in treating gait and balance dysfunctions in patients with multiple sclerosis: a randomized controlled trial

    PubMed Central

    Gandolfi, Marialuisa; Geroin, Christian; Picelli, Alessandro; Munari, Daniele; Waldner, Andreas; Tamburin, Stefano; Marchioretto, Fabio; Smania, Nicola

    2014-01-01

    Background: Extensive research on both healthy subjects and patients with central nervous damage has elucidated a crucial role of postural adjustment reactions and central sensory integration processes in generating and “shaping” locomotor function, respectively. Whether robotic-assisted gait devices might improve these functions in Multiple sclerosis (MS) patients has not been fully investigated in the literature. Purpose: The aim of this study was to compare the effectiveness of end-effector robot-assisted gait training (RAGT) and sensory integration balance training (SIBT) in improving walking and balance performance in patients with MS. Methods: Twenty-two patients with MS (EDSS: 1.5–6.5) were randomly assigned to two groups. The RAGT group (n = 12) underwent end-effector system training. The SIBT group (n = 10) underwent specific balance exercises. Each patient received twelve 50-min treatment sessions (2 days/week). A blinded rater evaluated patients before and after treatment as well as 1 month post treatment. Primary outcomes were walking speed and Berg Balance Scale. Secondary outcomes were the Activities-specific Balance Confidence Scale, Sensory Organization Balance Test, Stabilometric Assessment, Fatigue Severity Scale, cadence, step length, single and double support time, Multiple Sclerosis Quality of Life-54. Results: Between-group comparisons showed no significant differences on primary and secondary outcome measures over time. Within-group comparisons showed significant improvements in both groups on the Berg Balance Scale (P = 0.001). Changes approaching significance were found on gait speed (P = 0.07) only in the RAGT group. Significant changes in balance task-related domains during standing and walking conditions were found in the SIBT group. Conclusion: Balance disorders in patients with MS may be ameliorated by RAGT and by SIBT. PMID:24904361

  7. 3D plasmonic nanoantennas integrated with MEA biosensors

    NASA Astrophysics Data System (ADS)

    Dipalo, Michele; Messina, Gabriele C.; Amin, Hayder; La Rocca, Rosanna; Shalabaeva, Victoria; Simi, Alessandro; Maccione, Alessandro; Zilio, Pierfrancesco; Berdondini, Luca; de Angelis, Francesco

    2015-02-01

    Neuronal signaling in brain circuits occurs at multiple scales ranging from molecules and cells to large neuronal assemblies. However, current sensing neurotechnologies are not designed for parallel access of signals at multiple scales. With the aim of combining nanoscale molecular sensing with electrical neural activity recordings within large neuronal assemblies, in this work three-dimensional (3D) plasmonic nanoantennas are integrated with multielectrode arrays (MEA). Nanoantennas are fabricated by fast ion beam milling on optical resist; gold is deposited on the nanoantennas in order to connect them electrically to the MEA microelectrodes and to obtain plasmonic behavior. The optical properties of these 3D nanostructures are studied through finite element method (FEM) simulations that show a high electromagnetic field enhancement. This plasmonic enhancement is confirmed by surface-enhanced Raman spectroscopy of a dye performed in liquid, which presents an enhancement of almost 100 times the incident field amplitude at resonant excitation. Finally, the reported MEA devices are tested on cultured rat hippocampal neurons. Neurons develop by extending branches on the nanostructured electrodes and extracellular action potentials are recorded over multiple days in vitro. Raman spectra of living neurons cultured on the nanoantennas are also acquired. These results highlight that these nanostructures could be potential candidates for combining electrophysiological measures of large networks with simultaneous spectroscopic investigations at the molecular level. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr05578k

  8. Equating in Small-Scale Language Testing Programs

    ERIC Educational Resources Information Center

    LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan

    2017-01-01

    Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…

  9. Future of applied watershed science at regional scales

    Treesearch

    Lee Benda; Daniel Miller; Steve Lanigan; Gordon Reeves

    2009-01-01

    Resource managers must deal increasingly with land use and conservation plans applied at large spatial scales (watersheds, landscapes, states, regions) involving multiple interacting federal agencies and stakeholders. Access to a geographically focused and application-oriented database would allow users in different locations and with different concerns to quickly...

  10. Spotted Towhee population dynamics in a riparian restoration context

    Treesearch

    Stacy L. Small; Frank R., III Thompson; Geoffery R. Geupel; John Faaborg

    2007-01-01

    We investigated factors at multiple scales that might influence nest predation risk for Spotted Towhees (Pipilo maculatus) along the Sacramento River, California, within the context of large-scale riparian habitat restoration. We used the logistic-exposure method and Akaike's information criterion (AIC) for model selection to compare predator...

  11. FLARE: a New User Facility for Studies of Magnetic Reconnection Through Simultaneous, in-situ Measurements on MHD Scales, Ion Scales and Electron Scales

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W. S.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S. E.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.

    2017-12-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram [Ji & Daughton (2011)]. The whole device has been successfully assembled, with a rough leak check completed. The first plasmas are expected in fall or winter. The main diagnostic is an extensive set of magnetic probe arrays covering multiple scales, from local electron scales (~2 mm) to intermediate ion scales (~10 cm) and global MHD scales (~1 m), simultaneously providing in-situ measurements over all these relevant scales. Using these laboratory data, not only are the detailed spatial profiles around each reconnecting X-line available for direct comparison with spacecraft data, but the global conditions and consequences of magnetic reconnection, which are often difficult to quantify in space, can also be controlled or studied systematically. The planned procedures and example topics as a user facility will be discussed in detail.

  12. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization tools; the variant used here is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
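A minimal sketch of Sammon mapping in plain NumPy: minimize the Sammon stress E = (1/c) Σ_{i<j} (D_ij − d_ij)² / D_ij, with c = Σ_{i<j} D_ij, by gradient descent. The fixed learning rate is a simplification of Sammon's original pseudo-Newton update, chosen for brevity.

```python
import numpy as np

def sammon_stress(D, d):
    """Sammon stress E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij."""
    iu = np.triu_indices(len(D), k=1)
    return ((D[iu] - d[iu]) ** 2 / D[iu]).sum() / D[iu].sum()

def sammon(X, n_iter=2000, lr=0.2, seed=0):
    """Project rows of X to 2-D by gradient descent on the Sammon stress."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # input-space distances
    mask = ~np.eye(n, dtype=bool)
    c = D[np.triu_indices(n, k=1)].sum()
    Y = 0.01 * rng.standard_normal((n, 2))                 # small random init
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]
        d = np.linalg.norm(diff, axis=-1) + np.eye(n)      # +I avoids /0 on diagonal
        coef = np.where(mask, (D - d) / (np.where(mask, D, 1.0) * d), 0.0)
        # descent step: Y_i += lr * (2/c) * sum_j coef_ij * (Y_i - Y_j)
        Y += lr * (2.0 / c) * (coef[:, :, None] * diff).sum(axis=1)
    return Y
```

Applied to a population of individuals per generation, the resulting 2-D trajectory is the "path through search space" the abstract describes.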

  13. Process, pattern and scale: hydrogeomorphology and plant diversity in forested wetlands across multiple spatial scales

    NASA Astrophysics Data System (ADS)

    Alexander, L.; Hupp, C. R.; Forman, R. T.

    2002-12-01

    Many geodisturbances occur across large spatial scales, spanning entire landscapes and creating ecological phenomena in their wake. Ecological study at large scales poses special problems: (1) large-scale studies require large-scale resources, and (2) sampling is not always feasible at the appropriate scale, so researchers rely on data collected at smaller scales to interpret patterns across broad regions. A criticism of landscape ecology is that findings at small spatial scales are "scaled up" and applied indiscriminately across larger spatial scales. In this research, landscape scaling is addressed through process-pattern relationships between hydrogeomorphic processes and patterns of plant diversity in forested wetlands. The research addresses: (1) whether patterns and relationships between hydrogeomorphic, vegetation, and spatial variables can transcend scale; and (2) whether data collected at small spatial scales can be used to describe patterns and relationships across larger spatial scales. Field measurements of hydrologic, geomorphic, spatial, and vegetation data were collected or calculated for 15 1-ha sites on forested floodplains of six Chesapeake Bay Coastal Plain streams over a total area of about 20,000 km2. Hydroperiod (days/yr), floodplain surface elevation range (m), discharge (m3/s), stream power (kg-m/s2), sediment deposition (mm/yr), relative position downstream and other variables were used in multivariate analyses to explain differences in species richness, tree diversity (Shannon-Wiener Diversity Index H'), and plant community composition at four spatial scales. Data collected at the plot (400-m2) and site (c. 1-ha) scales are applied to and tested at the river watershed and regional spatial scales. Results indicate that plant species richness and tree diversity (Shannon-Wiener diversity index H') can be described by hydrogeomorphic conditions at all scales, but are best described at the site scale.
Data collected at plot and site scales are tested for spatial heterogeneity across the Chesapeake Bay Coastal Plain using a geostatistical variogram, and multiple regression analysis is used to relate plant diversity, spatial, and hydrogeomorphic variables across Coastal Plain regions and hydrologic regimes. Results indicate that relationships between hydrogeomorphic processes and patterns of plant diversity at finer scales can serve as proxies for relationships at coarser scales in some, but not all, cases. Findings also suggest that data collected at small scales can be used to describe trends across broader scales under limited conditions.
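The geostatistical variogram used for the heterogeneity test can be sketched as an empirical semivariogram: γ(h) = 1/(2 N(h)) Σ (z_i − z_j)² over point pairs whose separation falls in each distance bin. This is a generic illustration, not the authors' exact binning or estimator.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Empirical semivariogram: for each distance bin, half the mean squared
    difference of values over all point pairs whose separation lies in the bin."""
    coords = np.asarray(coords, float)
    z = np.asarray(values, float)
    i, j = np.triu_indices(len(z), k=1)                 # all unordered pairs
    h = np.linalg.norm(coords[i] - coords[j], axis=1)   # pair separations
    sq = (z[i] - z[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        sel = (h >= bin_edges[b]) & (h < bin_edges[b + 1])
        if sel.any():
            gamma[b] = sq[sel].mean() / 2.0
    return gamma
```

A semivariogram that keeps rising with lag indicates spatial trend or heterogeneity; a flattening sill indicates the range beyond which sites are effectively uncorrelated.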

  14. Assessing Hydrological and Energy Budgets in Amazonia through Regional Downscaling, and Comparisons with Global Reanalysis Products

    NASA Astrophysics Data System (ADS)

    Nunes, A.; Ivanov, V. Y.

    2014-12-01

    Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, a task further hampered by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses appropriate for climate assessment at regional scales, a regional spectral model combined precipitation assimilation with scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
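Spectral nudging, which the scale-selective bias correction resembles, relaxes only the model's large-scale (low-wavenumber) components toward the driving reanalysis while leaving the small scales free. A one-dimensional sketch (the cutoff wavenumber and relaxation weight are illustrative assumptions, not the study's settings):

```python
import numpy as np

def spectral_nudge(model_field, driving_field, cutoff, weight=0.9):
    """Relax the model's Fourier modes with wavenumber <= cutoff toward the
    driving (reanalysis) field, leaving higher wavenumbers untouched."""
    Fm = np.fft.rfft(model_field)
    Fd = np.fft.rfft(driving_field)
    k = np.arange(Fm.size)
    low = k <= cutoff
    Fm[low] = (1.0 - weight) * Fm[low] + weight * Fd[low]  # blend large scales only
    return np.fft.irfft(Fm, n=len(model_field))
```

In a real regional model the same idea is applied in 2-D per level and per time step, so the interior solution cannot drift from the large-scale boundary forcing.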

  15. The relativistic feedback discharge model of terrestrial gamma ray flashes

    NASA Astrophysics Data System (ADS)

    Dwyer, Joseph R.

    2012-02-01

    As thunderclouds charge, the large-scale fields may approach the relativistic feedback threshold, above which the production of relativistic runaway electron avalanches becomes self-sustaining through the generation of backward propagating runaway positrons and backscattered X-rays. Positive intracloud (IC) lightning may force the large-scale electric fields inside thunderclouds above the relativistic feedback threshold, causing the number of runaway electrons, and the resulting X-ray and gamma ray emission, to grow exponentially, producing very large fluxes of energetic radiation. As the flux of runaway electrons increases, ionization eventually causes the electric field to discharge, bringing the field below the relativistic feedback threshold again and reducing the flux of runaway electrons. These processes are investigated with a new model that includes the production, propagation, diffusion, and avalanche multiplication of runaway electrons; the production and propagation of X-rays and gamma rays; and the production, propagation, and annihilation of runaway positrons. In this model, referred to as the relativistic feedback discharge model, the large-scale electric fields are calculated self-consistently from the charge motion of the drifting low-energy electrons and ions, produced from the ionization of air by the runaway electrons, including two- and three-body attachment and recombination. Simulation results show that when relativistic feedback is considered, bright gamma ray flashes are a natural consequence of upward +IC lightning propagating in large-scale thundercloud fields. Furthermore, these flashes have the same time structures, including both single and multiple pulses, intensities, angular distributions, current moments, and energy spectra as terrestrial gamma ray flashes, and produce large current moments that should be observable in radio waves.

  16. Mapping the universe in three dimensions

    PubMed Central

    Haynes, Martha P.

    1996-01-01

    The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714

  17. Mapping the universe in three dimensions.

    PubMed

    Haynes, M P

    1996-12-10

    The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble's law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin.

  18. Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.

    PubMed

    Tran, Ngoc Tam L; Huang, Chun-Hsi

    2017-05-01

    We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool, version 2.0, which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable with the expandable computing resources of AWS.

  19. Bathymetric comparisons adjacent to the Louisiana barrier islands: Processes of large-scale change

    USGS Publications Warehouse

    List, J.H.; Jaffe, B.E.; Sallenger, A.H.; Hansen, M.E.

    1997-01-01

    This paper summarizes the results of a comparative bathymetric study encompassing 150 km of the Louisiana barrier-island coast. Bathymetric data surrounding the islands and extending to 12 m water depth were processed from three survey periods: the 1880s, the 1930s, and the 1980s. Digital comparisons between surveys show large-scale, coherent patterns of sea-floor erosion and accretion related to the rapid erosion and disintegration of the islands. Analysis of the sea-floor data reveals two primary processes driving this change: massive longshore transport, in the littoral zone and at shoreface depths; and increased sediment storage in ebb-tidal deltas. Relative sea-level rise, although extraordinarily high in the study area, is shown to be an indirect factor in causing the area's rapid shoreline retreat rates.

  20. Validating Bayesian truth serum in large-scale online human experiments.

    PubMed

    Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.
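For reference, a sketch of the standard BTS scoring rule in Prelec's formulation: each respondent reports an answer and a predicted population distribution, and receives an information score, log of the answer's actual frequency over the geometric mean of predicted frequencies, plus an α-weighted prediction score. This follows the published formula generically and is not necessarily this study's exact implementation.

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian truth serum scores.
    answers:     (n,) int array of chosen options in 0..K-1
    predictions: (n, K) array, each row a predicted population distribution
    Returns the (n,) array of BTS scores (information + alpha * prediction)."""
    answers = np.asarray(answers)
    y = np.clip(np.asarray(predictions, float), eps, 1.0)
    n, K = y.shape
    xbar = np.clip(np.bincount(answers, minlength=K) / n, eps, 1.0)  # actual freqs
    ybar = np.exp(np.log(y).mean(axis=0))                # geometric mean prediction
    info = np.log(xbar[answers] / ybar[answers])         # "surprisingly common" bonus
    pred = alpha * (xbar * np.log(y / xbar)).sum(axis=1) # prediction accuracy
    return info + pred
```

Answers that turn out more common than the crowd predicted score positively, which is what makes truthful reporting the equilibrium in large samples.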

  1. Validating Bayesian truth serum in large-scale online human experiments

    PubMed Central

    Frank, Morgan R.; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method’s mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon’s Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the “honest” distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where “honest” answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers. PMID:28494000

  2. Automatic Scoring of Paper-and-Pencil Figural Responses. Research Report.

    ERIC Educational Resources Information Center

    Martinez, Michael E.; And Others

    Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…

  3. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  4. Comparing Learner Community Behavior in Multiple Presentations of a Massive Open Online Course

    ERIC Educational Resources Information Center

    Gallagher, Silvia Elena; Savage, Timothy

    2015-01-01

    Massive Online Open Courses (MOOCs) can create large scale communities of learners who collaborate, interact and discuss learning materials and activities. MOOCs are often delivered multiple times with similar content to different cohorts of learners. However, research into the differences of learner communication, behavior and expectation between…

  5. Comparing Learner Community Behavior in Multiple Presentations of a Massive Open Online Course

    ERIC Educational Resources Information Center

    Gallagher, Silvia Elena; Savage, Timothy

    2016-01-01

    Massive Online Open Courses (MOOCs) can create large scale communities of learners who collaborate, interact and discuss learning materials and activities. MOOCs are often delivered multiple times with similar content to different cohorts of learners. However, research into the differences of learner communication, behavior and expectation between…

  6. Coupled Multiple-Response versus Free-Response Conceptual Assessment: An Example from Upper-Division Physics

    ERIC Educational Resources Information Center

    Wilcox, Bethany R.; Pollock, Steven J.

    2014-01-01

    Free-response research-based assessments, like the Colorado Upper-division Electrostatics Diagnostic (CUE), provide rich, fine-grained information about students' reasoning. However, because of the difficulties inherent in scoring these assessments, the majority of the large-scale conceptual assessments in physics are multiple choice. To increase…

  7. Floating Data and the Problem with Illustrating Multiple Regression.

    ERIC Educational Resources Information Center

    Sachau, Daniel A.

    2000-01-01

    Discusses how to introduce basic concepts of multiple regression by creating a large-scale, three-dimensional regression model using the classroom walls and floor. Addresses teaching points that should be covered and reveals student reaction to the model. Finds that the greatest benefit of the model is the low fear, walk-through, nonmathematical…

  8. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  9. Energetic basis for the molecular-scale organization of bone

    DOE PAGES

    Tao, Jinhui; Battle, Keith C.; Pan, Haihua; ...

    2014-12-24

    Here, the remarkable properties of bone derive from a highly organized arrangement of co-aligned nm-scale apatite platelets within a fibrillar collagen matrix. The origin of this arrangement is poorly understood, and the crystal structures of hydroxyapatite (HAP) and the non-mineralized collagen fibrils alone do not provide an explanation. Moreover, little is known about collagen-apatite interaction energies, which should strongly influence both the molecular-scale organization and the resulting mechanical properties of the composite. We investigated collagen-mineral interactions by combining dynamic force spectroscopy (DFS) measurements of binding energies with molecular dynamics (MD) simulations of binding and AFM observations of collagen adsorption on single crystals of calcium phosphate for four mineral phases of potential importance in bone formation. In all cases, we observe a strong preferential orientation of collagen binding, but comparison between the observed orientations and TEM analyses of native tissues shows that only calcium-deficient apatite (CDAP) provides an interface with collagen that is consistent with both. MD simulations predict preferred collagen orientations that agree with observations, and results from both MD and DFS reveal large values for the binding energy due to multiple binding sites. These findings reconcile apparent contradictions inherent in a hydroxyapatite or carbonated apatite (CAP) model of bone mineral and provide an energetic rationale for the molecular-scale organization of bone.

  10. Multiscale patterns and drivers of arbuscular mycorrhizal fungal communities in the roots and root-associated soil of a wild perennial herb.

    PubMed

    Rasmussen, Pil U; Hugerth, Luisa W; Blanchet, F Guillaume; Andersson, Anders F; Lindahl, Björn D; Tack, Ayco J M

    2018-03-24

    Arbuscular mycorrhizal (AM) fungi form diverse communities and are known to influence above-ground community dynamics and biodiversity. However, the multiscale patterns and drivers of AM fungal composition and diversity are still poorly understood. We sequenced DNA markers from roots and root-associated soil from Plantago lanceolata plants collected across multiple spatial scales to allow comparison of AM fungal communities among neighbouring plants, plant subpopulations, nearby plant populations, and regions. We also measured soil nutrients, temperature, humidity, and community composition of neighbouring plants and non-AM root-associated fungi. AM fungal communities were already highly dissimilar among neighbouring plants (c. 30 cm apart), albeit with a high variation in the degree of similarity at this small spatial scale. AM fungal communities were increasingly, and more consistently, dissimilar at larger spatial scales. Spatial structure and environmental drivers explained a similar percentage of the variation, from 7% to 25%. A large fraction of the variation remained unexplained, which may be a result of unmeasured environmental variables, species interactions and stochastic processes. We conclude that AM fungal communities are highly variable among nearby plants. AM fungi may therefore play a major role in maintaining small-scale variation in community dynamics and biodiversity. © 2018 The Authors New Phytologist © 2018 New Phytologist Trust.

  11. Energetic basis for the molecular-scale organization of bone.

    PubMed

    Tao, Jinhui; Battle, Keith C; Pan, Haihua; Salter, E Alan; Chien, Yung-Ching; Wierzbicki, Andrzej; De Yoreo, James J

    2015-01-13

    The remarkable properties of bone derive from a highly organized arrangement of coaligned nanometer-scale apatite platelets within a fibrillar collagen matrix. The origin of this arrangement is poorly understood and the crystal structures of hydroxyapatite (HAP) and the nonmineralized collagen fibrils alone do not provide an explanation. Moreover, little is known about collagen-apatite interaction energies, which should strongly influence both the molecular-scale organization and the resulting mechanical properties of the composite. We investigated collagen-mineral interactions by combining dynamic force spectroscopy (DFS) measurements of binding energies with molecular dynamics (MD) simulations of binding and atomic force microscopy (AFM) observations of collagen adsorption on single crystals of calcium phosphate for four mineral phases of potential importance in bone formation. In all cases, we observe a strong preferential orientation of collagen binding, but comparison between the observed orientations and transmission electron microscopy (TEM) analyses of native tissues shows that only calcium-deficient apatite (CDAP) provides an interface with collagen that is consistent with both. MD simulations predict preferred collagen orientations that agree with observations, and results from both MD and DFS reveal large values for the binding energy due to multiple binding sites. These findings reconcile apparent contradictions inherent in a hydroxyapatite or carbonated apatite (CAP) model of bone mineral and provide an energetic rationale for the molecular-scale organization of bone.

  12. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  13. EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) Comparisons Across Multiple Spatial Scales

    EPA Science Inventory

    Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric cond...

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
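The correlation measure that methods like MOCC build on can be illustrated with plain zero-mean normalized cross-correlation. This is a generic sketch of the similarity score only, not the published MOCC implementation (which adds multiscale oriented corner features and rotation invariance):

```python
def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size patches
    (given as flat lists of intensities). Returns a similarity in [-1, 1];
    values near 1 indicate a strong match."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    # Subtract the means so the score is invariant to brightness offsets.
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0
```

Because the patches are mean-subtracted and normalized, a linear intensity change (gain and offset) between the two images leaves the score unchanged, which is why correlation-based matching tolerates illumination differences but not, by itself, rotation or scale changes.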

  15. Culture rather than genes provides greater scope for the evolution of large-scale human prosociality

    PubMed Central

    Bell, Adrian V.; Richerson, Peter J.; McElreath, Richard

    2009-01-01

    Whether competition among large groups played an important role in human social evolution is dependent on how variation, whether cultural or genetic, is maintained between groups. Comparisons between genetic and cultural differentiation between neighboring groups show how natural selection on large groups is more plausible on cultural rather than genetic variation. PMID:19822753

  16. Automated Topographic Change Detection via Dem Differencing at Large Scales Using The Arcticdem Database

    NASA Astrophysics Data System (ADS)

    Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.

    2016-12-01

    In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatio-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N), offering the first polar, high-spatial-resolution (2-8m by region) dataset, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.

  17. Improved automatic estimation of winds at the cloud top of Venus using superposition of cross-correlation surfaces

    NASA Astrophysics Data System (ADS)

    Ikegawa, Shinichi; Horinouchi, Takeshi

    2016-06-01

    Accurate wind observation is a key to studying atmospheric dynamics. A new automated cloud tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of cross-correlation with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of the screened vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and it further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the Venusian atmosphere.
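The superposition step described above, averaging correlation surfaces from several image pairs so that a consistent advection peak reinforces while spurious matches wash out, can be sketched as follows. This is a toy illustration (data layout and function name are hypothetical, not the authors' code):

```python
def best_displacement(corr_surfaces):
    """Average several cross-correlation surfaces (2-D lists of equal
    shape, indexed by candidate displacement) and return the (row, col)
    of the combined peak together with its averaged correlation value.

    A displacement supported by all pairs keeps a high average score;
    a false match that appears in only one pair is suppressed."""
    rows = len(corr_surfaces[0])
    cols = len(corr_surfaces[0][0])
    avg = [
        [sum(s[r][c] for s in corr_surfaces) / len(corr_surfaces)
         for c in range(cols)]
        for r in range(rows)
    ]
    best = max(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: avg[rc[0]][rc[1]],
    )
    return best, avg[best[0]][best[1]]
```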

  18. Study of Multiple Scale Physics of Magnetic Reconnection on the FLARE (Facility for Laboratory Reconnection Experiments)

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Bale, S. D.; Carter, T. A.; Crocker, N.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.; Chen, Y.; Cutler, R.; Fox, W. R., II; Heitzenroeder, P.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Yamada, M.; Yoo, J.

    2015-12-01

    The FLARE device (flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton to study magnetic reconnection in regimes directly relevant to space, solar and astrophysical plasmas. The existing small-scale experiments have focused on the single X-line reconnection process, either with small effective sizes or at low Lundquist numbers, both of which are typically very large in natural plasmas. The configuration of the FLARE device is designed to provide experimental access to the new regimes involving multiple X-lines, as guided by a reconnection "phase diagram" [Ji & Daughton, PoP (2011)]. Most of the major components of the FLARE device have been designed and are under construction. The device will be assembled and installed in 2016, followed by commissioning and operation in 2017. The planned research on FLARE as a user facility will be discussed on topics including the multiple-scale nature of magnetic reconnection from global fluid scales to ion and electron kinetic scales. Results from scoping simulations based on particle and fluid codes and possible comparative research with space measurements will be presented.

  19. Toward an International Comparison of Economic and Educational Mobility: Recent Findings from the Japan Child Panel Survey

    ERIC Educational Resources Information Center

    Akabayashi, Hideo; Nakamura, Ryosuke; Naoi, Michio; Shikishima, Chizuru

    2016-01-01

    In the past decades, income inequality has risen in most developed countries. There is growing interest among economists in international comparisons of economic and educational mobility. This is aided by the availability of internationally comparable, large-scale data. The present paper aims to make three contributions. First, we introduce the…

  20. Cover estimations using object-based image analysis rule sets developed across multiple scales in pinyon-juniper woodlands

    USDA-ARS?s Scientific Manuscript database

    Numerous studies have been conducted that evaluate the utility of remote sensing for monitoring and assessing vegetation and ground cover to support land management decisions and complement ground-measurements. However, few land cover comparisons have been made using high-resolution imagery and obj...

  1. Comparison of Habitat-Specific Nutrient Removal and Release in Pacific NW Salt Marshes at Multiple Spatial Scales

    EPA Science Inventory

    Wetlands can be sources, sinks and transformers of nutrients, although it is their role in nutrient removal that is valued as a water purification ecosystem service. In order to quantify that service for any wetland, it is important to understand the drivers of nutrient removal w...

  2. Are bark beetle outbreaks less synchronous than forest Lepidoptera outbreaks?

    Treesearch

    Bjorn Okland; Andrew M. Liebhold; Ottar N. Bjornstad; Nadir Erbilgin; Paal Krokene

    2005-01-01

    Comparisons of intraspecific spatial synchrony across multiple epidemic insect species can be useful for generating hypotheses about major determinants of population patterns at larger scales. The present study compares patterns of spatial synchrony in outbreaks of six epidemic bark beetle species in North America and Europe. Spatial synchrony among populations of the...

  3. Corruption in Higher Education: Conceptual Approaches and Measurement Techniques

    ERIC Educational Resources Information Center

    Osipian, Ararat L.

    2007-01-01

    Corruption is a complex and multifaceted phenomenon. Forms of corruption are multiple. Measuring corruption is necessary not only for getting ideas about the scale and scope of the problem, but for making simple comparisons between the countries and conducting comparative analysis of corruption. While the total impact of corruption is indeed…

  4. Students' Confidence in Their Performance Judgements: A Comparison of Different Response Scales

    ERIC Educational Resources Information Center

    Händel, Marion; Fritzsche, Eva Susanne

    2015-01-01

    We report results of two studies on metacognitive accuracy with undergraduate education students. Participating students were asked to judge their personal performance in a multiple-choice exam as well as to state their confidence in their performance judgement (second-order judgement [SOJ]). In each study, we compared four conditions that…

  5. A genetically stable rooting protocol for propagating a threatened medicinal plant—Celastrus paniculatus

    PubMed Central

    Phulwaria, Mahendra; Rai, Manoj K.; Patel, Ashok Kumar; Kataria, Vinod; Shekhawat, N. S.

    2012-01-01

    Celastrus paniculatus, belonging to the family Celastraceae, is an important medicinal plant of India. Owing to the ever-increasing demand from the pharmaceutical industry, the species is being overexploited, thereby threatening its stock in the wild. Poor seed viability coupled with low germination restricts its propagation through sexual means. Thus, alternative approaches such as in vitro techniques are highly desirable for large-scale propagation of this medicinally important plant. Nodal segments, obtained from a 12-year-old mature plant, were used as explants for multiple shoot induction. Shoot multiplication was achieved by repeated transfer of mother explants and subculturing of in vitro produced shoot clumps on Murashige and Skoog's (MS) medium supplemented with various concentrations of 6-benzylaminopurine (BAP) alone or in combination with auxin (indole-3-acetic acid (IAA) or α-naphthalene acetic acid (NAA)). The maximum number of shoots (47.75 ± 2.58) was observed on MS medium supplemented with BAP (0.5 mg L−1) and IAA (0.1 mg L−1). In vitro raised shoots were rooted under ex vitro conditions after treating them with indole-3-butyric acid (300 mg L−1) for 3 min. Over 95 % of plantlets acclimatized successfully. The genetic fidelity of the regenerated plants was assessed using random amplified polymorphic DNA. No polymorphism was detected in regenerated plants and the mother plant, revealing the genetic fidelity of the in vitro raised plantlets. The protocol discussed could be effectively employed for large-scale multiplication of C. paniculatus. Its commercial application could be realized for the large-scale multiplication and supply to the State Forest Department.

  6. Fine-scale characteristics of interplanetary sector

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.; Neubauer, F. M.; Barnstoff, H.

    1980-01-01

    The structure of the interplanetary sector boundaries observed by Helios 1 within sector transition regions was studied. Such regions consist of intermediate (nonspiral) average field orientations in some cases, as well as a number of large-angle directional discontinuities (DD's) on the fine scale (time scales of 1 hour or less). Such DD's are found to be more similar to tangential than rotational discontinuities, to be oriented on average more nearly perpendicular than parallel to the ecliptic plane, to be accompanied usually by a large dip (about 80%) in B and, with a most probable thickness of 3 x 10^4 km, to be significantly thicker than those previously studied. It is hypothesized that the observed structures represent multiple traversals of the global heliospheric current sheet due to local fluctuations in the position of the sheet. There is evidence that such fluctuations are sometimes produced by wavelike motions or surface corrugations of scale length 0.05 - 0.1 AU superimposed on the large-scale structure.

  7. Homogenization techniques for population dynamics in strongly heterogeneous landscapes.

    PubMed

    Yurk, Brian P; Cobbold, Christina A

    2018-12-01

    An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity are influenced by patch residence times that depend on patch preference, as well as by movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from the small-scale features.
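For orientation, the flavor of a homogenized coefficient can be shown with the classical one-dimensional result for diffusion through a rapidly varying periodic medium. This is a textbook identity, not the paper's interface-condition derivation, which generalizes beyond it:

```latex
% For u_t = (D(x) u_x)_x with D(x) periodic of period L varying on a
% scale much smaller than the domain, the large-scale dynamics obey
%   u_t = \bar{D}\, u_{xx},
% where \bar{D} is the harmonic mean of D over one period:
\bar{D} = \left( \frac{1}{L} \int_0^L \frac{dx}{D(x)} \right)^{-1}
```

The harmonic mean is dominated by the low-diffusivity patches, capturing the intuition that slow patches act as bottlenecks for large-scale spread.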

  8. Matching traditional and scientific observations to detect environmental change: a discussion on Arctic terrestrial ecosystems.

    PubMed

    Huntington, Henry; Callaghan, Terry; Fox, Shari; Krupnik, Igor

    2004-11-01

    Recent environmental changes are having, and are expected to continue to have, significant impacts in the Arctic as elsewhere in the world. Detecting those changes and determining the mechanisms that cause them are far from trivial problems. The use of multiple methods of observation can increase confidence in individual observations, broaden the scope of information available about environmental change, and contribute to insights concerning mechanisms of change. In this paper, we examine the ways that using traditional ecological knowledge (TEK) together with scientific observations can achieve these objectives. A review of TEK observations in comparison with scientific observations demonstrates the promise of this approach, while also revealing several challenges to putting it into practice on a large scale. Further efforts are suggested, particularly in undertaking collaborative projects designed to produce parallel observations that can be readily compared and analyzed in greater detail than is possible in an opportunistic sample.

  9. A simple, objective analysis scheme for scatterometer data. [Seasat A satellite observation of wind over ocean

    NASA Technical Reports Server (NTRS)

    Levy, G.; Brown, R. A.

    1986-01-01

    A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
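The "bootstrap" error evaluation mentioned in the abstract is standard resampling with replacement; a minimal generic sketch (not the authors' code):

```python
import random
import statistics

def bootstrap_se(sample, stat=statistics.mean, n_boot=2000, seed=0):
    """Estimate the standard error of a statistic by the bootstrap:
    draw n_boot resamples of the data with replacement, recompute the
    statistic on each, and report the spread of those replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = [
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    ]
    return statistics.stdev(reps)
```

The appeal for dense observational data is that no distributional assumption about the errors is needed; the spread of the resampled statistics stands in for the unknown sampling distribution.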

  10. Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.

    PubMed

    Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong

    2017-10-11

    The development of silicon (Si) materials during past decades has boosted the prosperity of the modern semiconductor industry. In comparison with bulk-Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention in solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of energy conversion devices with high efficiency are prerequisites. This review focuses on the recent progress in large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.

  11. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    PubMed Central

    2010-01-01

    Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities. PMID:20482787

  12. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    PubMed

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  13. Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem

    USGS Publications Warehouse

    Schneider, David C.; Piatt, John F.

    1986-01-01

    The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
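The variance-to-mean ratio test for scale-dependent patchiness used above can be sketched generically: aggregate the census counts into blocks of increasing length and compare each ratio to the Poisson expectation of 1. A hypothetical illustration, not the authors' analysis code:

```python
def variance_to_mean(counts, block):
    """Variance-to-mean ratio of census counts aggregated into
    non-overlapping blocks of `block` consecutive bins.

    Ratios near 1 are consistent with a random (Poisson) distribution;
    ratios well above 1 indicate aggregation (patchiness) at that scale."""
    sums = [
        sum(counts[i:i + block])
        for i in range(0, len(counts) - block + 1, block)
    ]
    n = len(sums)
    mean = sum(sums) / n
    var = sum((s - mean) ** 2 for s in sums) / (n - 1)
    return var / mean if mean else float("nan")
```

Evaluating the ratio across a range of block sizes, as the study does across measurement distances, shows at which scales the counts clump together.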

  14. Common source-multiple load vs. separate source-individual load photovoltaic system

    NASA Technical Reports Server (NTRS)

    Appelbaum, Joseph

    1989-01-01

    A comparison of system performance is made for two possible system setups: (1) individual loads powered by separate solar cell sources; and (2) multiple loads powered by a common solar cell source. A proof for resistive loads is given that shows the advantage of a common source over a separate source photovoltaic system for a large range of loads. For identical loads, both systems perform the same.

  15. An Eddy-Diffusivity Mass-flux (EDMF) closure for the unified representation of cloud and convective processes

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Schneider, T.; Teixeira, J.; Lam, R.; Pressel, K. G.

    2014-12-01

    Sub-grid scale (SGS) closures in current climate models are usually decomposed into several largely independent parameterization schemes for different cloud and convective processes, such as boundary layer turbulence, shallow convection, and deep convection. These separate parameterizations usually do not converge as the resolution is increased or as physical limits are taken, which makes it difficult to represent the interactions and smooth transitions among different cloud and convective regimes. Here we present an eddy-diffusivity mass-flux (EDMF) closure that represents all sub-grid scale turbulent, convective, and cloud processes in a unified parameterization scheme. Buoyant updrafts and precipitative downdrafts are parameterized with a prognostic multiple-plume mass-flux (MF) scheme; the prognostic term for the mass flux is retained so that the life cycles of convective plumes are better represented. The interaction between updrafts and downdrafts is parameterized with a buoyancy-sorting model. Turbulent mixing outside the plumes is represented by eddy diffusion, in which the eddy diffusivity (ED) is determined from turbulent kinetic energy (TKE), calculated from a TKE balance that couples the environment with updrafts and downdrafts. Similarly, tracer variances are decomposed consistently among updrafts, downdrafts, and the environment. The closure is internally coupled with a probabilistic cloud scheme and a simple precipitation scheme. We have also developed a relatively simple two-stream radiative scheme that includes the longwave (LW) and shortwave (SW) effects of clouds and the LW effect of water vapor. We have tested this closure in a single-column model for various regimes spanning stratocumulus, shallow cumulus, and deep convection. The model is also run toward statistical equilibrium with climatologically relevant large-scale forcings. These model tests are validated against large-eddy simulation (LES) with the same forcings. 
The comparison of results verifies the capacity of this closure to realistically represent different cloud and convective processes. Implementation of the closure in an idealized GCM allows us to study cloud feedbacks to climate change and the interactions between clouds, convection, and the large-scale circulation.

  16. Unifying Inference of Meso-Scale Structures in Networks.

    PubMed

    Tunç, Birkan; Verma, Ragini

    2015-01-01

    Networks are among the most prevalent formal representations in scientific studies, employed to depict interactions between objects such as molecules, neuronal clusters, or social groups. Studies performed at the meso-scale, which involve grouping objects based on their distinctive interaction patterns, form one of the main lines of investigation in network science. In a social network, for instance, meso-scale structures can correspond to isolated social groupings or to groups of individuals that serve as a communication core. To date, research on different meso-scale structures, such as community and core-periphery structures, has been conducted via independent approaches, which precludes an algorithmic design that can handle multiple meso-scale structures and decide which structure better explains the observed data. In this study, we propose a unified formulation for the algorithmic detection and analysis of different meso-scale structures. This facilitates the investigation of hybrid structures that capture the interplay between multiple meso-scale structures, as well as statistical comparison of competing structures, both of which have hitherto been unavailable. We demonstrate the applicability of the methodology by analyzing the human brain network, determining the dominant organizational structure of the brain (communities) as well as its auxiliary characteristics (core-periphery).

  17. Large scale modulation of high frequency acoustic waves in periodic porous media.

    PubMed

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

    This paper deals with the description of the large-scale modulation of high-frequency acoustic waves in gas-saturated periodic porous media. High frequencies imply local dynamics at the pore scale and therefore an absence of scale separation in the usual sense of homogenization. However, although the pressure varies spatially within the pores (according to periodic eigenmodes), the mode amplitude can present a large-scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the large-scale amplitude modulation of high-frequency waves. The significant differences between the modulations of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.

  18. A comparative analysis of rawinsonde and NIMBUS 6 and TIROS N satellite profile data

    NASA Technical Reports Server (NTRS)

    Scoggins, J. R.; Carle, W. E.; Knight, K.; Moyer, V.; Cheng, N. M.

    1981-01-01

    Comparisons are made between rawinsonde and satellite profiles in seven areas for a wide range of surface and weather conditions. Variables considered include temperature, dewpoint temperature, thickness, precipitable water, lapse rate of temperature, stability, geopotential height, mixing ratio, wind direction, wind speed, and kinematic parameters, including vorticity and the advection of vorticity and temperature. In addition, comparisons are made in the form of cross sections and synoptic fields for selected variables. Sounding data from the NIMBUS 6 and TIROS N satellites were used. Geostrophic wind computed from smoothed geopotential heights provided large scale flow patterns that agreed well with the rawinsonde wind fields. Surface wind patterns as well as magnitudes computed by use of the log law to extrapolate wind to a height of 10 m agreed with observations. Results of this study demonstrate rather conclusively that satellite profile data can be used to determine characteristics of large scale systems but that small scale features, such as frontal zones, cannot yet be resolved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucero, D. M.; Young, L. M.

    We present an analysis of new and archival Very Large Array H I observations of a sample of 11 early-type galaxies rich in CO, with detailed comparisons of CO and H I distributions and kinematics. The early-type sample consists of both lenticular and elliptical galaxies in a variety of environments. A range of morphologies and environments were selected in order to give a broader understanding of the origins, distribution, and fate of the cold gas in early-type galaxies. Six of the eleven galaxies in the sample are detected in both H I and CO. The H₂ to H I mass ratios for this sample range from 0.2 to 120. The H I morphologies of the sample are consistent with those found in recent H I surveys of early-type galaxies, which also find a mix of H I morphologies and masses, low H I peak surface densities, and a lack of H I in early-type galaxies that reside in high-density environments. The H I-detected galaxies have a wide range of H I masses (1.4 × 10^6 to 1.1 × 10^10 M_Sun). There does not appear to be any correlation between the H I mass and morphology (E versus S0). When H I is detected, it is centrally peaked; there are no kiloparsec-scale central H I depressions like those observed for early-type spiral galaxies at similar spatial resolutions and scales. A kinematic comparison between the H I and CO indicates that both cold gas components share the same origin. The primary goal of this and a series of future papers is to better understand the relationship between the atomic and molecular gas in early-type galaxies, and to compare the observed relationships with those of spiral galaxies, where this relationship has been studied in depth.

  20. Impact of pretreatment with antidepressants on the efficacy of duloxetine in terms of mood symptoms and functioning: an analysis of 15 pooled major depressive disorder studies.

    PubMed

    Barros, Bruno R; Schacht, Alexander; Happich, Michael; Televantou, Foula; Berggren, Lovisa; Walker, Daniel J; Dueñas, Hector J

    2014-01-01

    This post hoc analysis aimed to determine whether patients with major depressive disorder (MDD) in duloxetine trials who were antidepressant naive or who were previously exposed to antidepressants exhibited differences in efficacy and functioning. Data were pooled from 15 double-blind, placebo- and/or active-controlled duloxetine trials of adult patients with MDD conducted by Eli Lilly and Company. The individual studies took place between March 2000 and November 2009. Data were analyzed using 4 pretreatment subgroups: first-episode never treated, multiple-episode never treated, treated previously only with selective serotonin reuptake inhibitors (SSRIs), and previously treated with antidepressants other than just SSRIs. Measures included the 17-item Hamilton Depression Rating Scale (HDRS-17) total and somatic symptom subscale scores, Montgomery-Asberg Depression Rating Scale (MADRS) total score, and Sheehan Disability Scale total score. Response rates (50% and 30%) were based on the HDRS-17 total score and remission rates on either the HDRS-17 or MADRS total score. Response and remission rates were significantly greater (P < .05 in 11 of 12 comparisons) for duloxetine versus placebo in the 4 subgroups. A trend of greater response and remission occurred for first-episode versus multiple-episode patients; both groups were generally higher than the antidepressant-treated groups. Mean changes in efficacy measures were mostly significantly greater (P < .05 in 13 of 16 comparisons) for duloxetine versus placebo within each pretreatment subgroup, with some (P < .05 in 2 of 24 comparisons) significant interaction effects between subgroups on HDRS-17 total and somatic symptoms scores. Duloxetine was generally superior to placebo on response and remission rates and in mean change on efficacy measures. Response and remission rates were numerically greater for first-episode versus multiple-episode and drug-treated patients. 
Mean change differences on efficacy measures among the 4 subgroups were inconsistent. Duloxetine showed a similar therapeutic effect independent of episode frequency and antidepressant pretreatment.

  1. Evaluation of an index of biotic integrity approach to assess fish assemblage condition in Western USA streams and rivers at varying spatial scales

    EPA Science Inventory

    Consistent assessments of biological condition are needed across multiple ecoregions to provide a greater understanding of the spatial extent of environmental degradation. However, consistent assessments at large geographic scales are often hampered by lack of uniformity in data ...

  2. Local-scale invasion pathways and small founder numbers in introduced Sacramento pikeminnow (Ptychocheilus grandis)

    Treesearch

    Andrew P. Kinziger; Rodney J. Nakamoto; Bret C. Harvey

    2014-01-01

    Given the general pattern of invasions with severe ecological consequences commonly resulting from multiple introductions of large numbers of individuals on the intercontinental scale, we explored an example of a highly successful, ecologically significant invader introduced over a short distance, possibly via minimal propagule pressure. The Sacramento pikeminnow (

  3. Taking the pulse of a continent: Expanding site-based research infrastructure for regional- to continental-scale ecology

    USDA-ARS?s Scientific Manuscript database

    Many of the most dramatic and surprising effects of global change on ecological systems will occur across large spatial extents, from regions to continents. Multiple ecosystem types will be impacted across a range of interacting spatial and temporal scales. The ability of ecologists to understand an...

  4. MONITORING COASTAL RESOURCES AT MULTIPLE SPATIAL AND TEMPORAL SCALES: LESSONS FROM EMAP 2001 EMAP SYMPOSIUM, APRIL 24-27, PENSACOLA BEACH, FL

    EPA Science Inventory

    In 1990, EMAP's Coastal Monitoring Program conducted its first regional sampling program in the Virginian Province. This first effort focused only at large spatial scales (regional) with some stratification to examine estuarine types. In the ensuing decade, EMAP-Coastal has condu...

  5. Making Visible Teacher Reports of Their Teaching Experiences: The Early Childhood Teacher Experiences Scale

    ERIC Educational Resources Information Center

    Fantuzzo, John; Perlman, Staci; Sproul, Faith; Minney, Ashley; Perry, Marlo A.; Li, Feifei

    2012-01-01

    The study developed multiple independent scales of early childhood teacher experiences (ECTES). ECTES was co-constructed with preschool, kindergarten, and first grade teachers in a large urban school district. Demographic, ECTES, and teaching practices data were collected from 584 teachers. Factor analyses documented three teacher experience…

  6. Relative Costs of Various Types of Assessments.

    ERIC Educational Resources Information Center

    Wheeler, Patricia H.

    Issues of the relative costs of multiple choice tests and alternative types of assessment are explored. Before alternative assessments in large-scale or small-scale programs are used, attention must be given to cost considerations and the resources required to develop and implement the assessment. Major categories of cost to be considered are…

  7. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.

  8. Cooperation without culture? The null effect of generalized trust on intentional homicide: a cross-national panel analysis, 1995-2009.

    PubMed

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.

  9. A visualization tool to support decision making in environmental and biological planning

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.

    2014-01-01

    Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.

  10. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
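    The exact baseline that the hybrid method builds on is the stochastic simulation algorithm. A minimal sketch of Gillespie's direct method on a toy birth-death system (rates and species are invented for illustration; the paper's fast/slow partitioning and chemical Langevin treatment are not reproduced here):

    ```python
    import random

    # Minimal Gillespie direct-method SSA for a toy birth-death system:
    #   (1) 0 -> X   at rate k1
    #   (2) X -> 0   at rate k2 * X
    # This is the exact discrete algorithm; a hybrid method would instead
    # approximate the "fast" reactions as a continuous Langevin process.

    def ssa(x0, k1, k2, t_end, rng):
        t, x = 0.0, x0
        while True:
            a1, a2 = k1, k2 * x          # reaction propensities
            a0 = a1 + a2
            if a0 == 0:
                break
            dt = rng.expovariate(a0)     # time to the next reaction event
            if t + dt > t_end:
                break
            t += dt
            if rng.random() * a0 < a1:   # choose which reaction fires
                x += 1
            else:
                x -= 1
        return x

    # The long-run mean copy number approaches k1 / k2 (here 10 / 0.2 = 50).
    samples = [ssa(0, 10.0, 0.2, 200.0, random.Random(seed))
               for seed in range(200)]
    print(sum(samples) / len(samples))
    ```

    The cost of the exact method is one loop iteration per reaction event, which is exactly the scaling problem with "fast" reactions that motivates the hybrid partitioning described in the abstract.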

  11. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, R.N.; Boulanger, A.; Bagdonas, E.P.; Xu, L.; He, W.

    1996-12-17

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.

  12. Method for identifying subsurface fluid migration and drainage pathways in and among oil and gas reservoirs using 3-D and 4-D seismic imaging

    DOEpatents

    Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei

    1996-01-01

    The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.

  13. PhosphOrtholog: a web-based tool for cross-species mapping of orthologous protein post-translational modifications.

    PubMed

    Chaudhuri, Rima; Sadrieh, Arash; Hoffman, Nolan J; Parker, Benjamin L; Humphrey, Sean J; Stöckli, Jacqueline; Hill, Adam P; James, David E; Yang, Jean Yee Hwa

    2015-08-19

    Most biological processes are influenced by protein post-translational modifications (PTMs). Identifying novel PTM sites in different organisms, including humans and model organisms, has expedited our understanding of key signal transduction mechanisms. However, with increasing availability of deep, quantitative datasets in diverse species, there is a growing need for tools to facilitate cross-species comparison of PTM data. This is particularly important because functionally important modification sites are more likely to be evolutionarily conserved; yet cross-species comparison of PTMs is difficult since they often lie in structurally disordered protein domains. Current tools that address this can only map known PTMs between species based on known orthologous phosphosites, and do not enable the cross-species mapping of newly identified modification sites. Here, we addressed this by developing a web-based software tool, PhosphOrtholog ( www.phosphortholog.com ) that accurately maps protein modification sites between different species. This facilitates the comparison of datasets derived from multiple species, and should be a valuable tool for the proteomics community. Here we describe PhosphOrtholog, a web-based application for mapping known and novel orthologous PTM sites from experimental data obtained from different species. PhosphOrtholog is the only generic and automated tool that enables cross-species comparison of large-scale PTM datasets without relying on existing PTM databases. This is achieved through pairwise sequence alignment of orthologous protein residues. To demonstrate its utility we apply it to two sets of human and rat muscle phosphoproteomes generated following insulin and exercise stimulation, respectively, and one publicly available mouse phosphoproteome following cellular stress revealing high mapping and coverage efficiency. 
Although coverage statistics are dataset dependent, PhosphOrtholog more than doubled the number of cross-species mapped sites in all our example data sets compared with those recovered using existing resources such as PhosphoSitePlus. PhosphOrtholog is the first tool that enables mapping of thousands of novel and known protein phosphorylation sites across species, accessible through an easy-to-use web interface. Identification of conserved PTMs across species from large-scale experimental data increases our knowledge base of functional PTM sites. Moreover, PhosphOrtholog is generic, being applicable to other PTM datasets such as acetylation, ubiquitination, and methylation.
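    The core operation described, mapping a modification site between orthologs through a pairwise alignment of protein residues, can be sketched as follows. The gapped alignment is assumed to come from any standard global aligner; the sequences and site positions are toy examples, not PhosphOrtholog's actual code:

    ```python
    # Sketch of cross-species PTM-site mapping through a pairwise alignment.
    # Assumes the gapped alignment of two orthologous proteins is already
    # available; sequences below are toy examples.

    def map_site(aln_a, aln_b, site_a):
        """Map a 1-based residue position in species A onto species B via
        their aligned (gapped) sequences. Returns None if the site aligns
        to a gap in B (i.e., has no orthologous residue)."""
        pos_a = pos_b = 0
        for col_a, col_b in zip(aln_a, aln_b):
            if col_a != "-":
                pos_a += 1
            if col_b != "-":
                pos_b += 1
            if col_a != "-" and pos_a == site_a:
                return pos_b if col_b != "-" else None
        return None  # site lies beyond the alignment

    # Toy alignment: a phospho-serine at position 4 of protein A.
    aln_a = "MKT-SPQR"
    aln_b = "MKTASPQ-"
    print(map_site(aln_a, aln_b, 4))  # the serine maps to position 5 in B
    ```

    Walking the alignment column by column keeps both ungapped residue counters in sync, which is what makes the mapping work even when the site sits inside a structurally disordered, indel-rich region.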

  14. Developing a Multiplexed Quantitative Cross-Linking Mass Spectrometry Platform for Comparative Structural Analysis of Protein Complexes.

    PubMed

    Yu, Clinton; Huszagh, Alexander; Viner, Rosa; Novitsky, Eric J; Rychnovsky, Scott D; Huang, Lan

    2016-10-18

    Cross-linking mass spectrometry (XL-MS) represents a recently popularized hybrid methodology for defining protein-protein interactions (PPIs) and analyzing structures of large protein assemblies. In particular, XL-MS strategies have been demonstrated to be effective in elucidating molecular details of PPIs at the peptide resolution, providing a complementary set of structural data that can be utilized to refine existing complex structures or direct de novo modeling of unknown protein structures. To study structural and interaction dynamics of protein complexes, quantitative cross-linking mass spectrometry (QXL-MS) strategies based on isotope-labeled cross-linkers have been developed. Although successful, these approaches are mostly limited to pairwise comparisons. In order to establish a robust workflow enabling comparative analysis of multiple cross-linked samples simultaneously, we have developed a multiplexed QXL-MS strategy, namely, QMIX (Quantitation of Multiplexed, Isobaric-labeled cross (X)-linked peptides) by integrating MS-cleavable cross-linkers with isobaric labeling reagents. This study has established a new analytical platform for quantitative analysis of cross-linked peptides, which can be directly applied for multiplexed comparisons of the conformational dynamics of protein complexes and PPIs at the proteome scale in future studies.

  15. Comparison of Three Soil Moisture Sensor Types Under Field Conditions Based on the Marena, Oklahoma, In Situ Sensor Testbed (MOISST)

    NASA Astrophysics Data System (ADS)

    Zhang, N.; Quiring, S. M.; Ochsner, T. E.

    2017-12-01

    Soil moisture monitoring networks commonly adopt different sensor technologies, which results in different measurement units and depths and impedes large-scale soil moisture applications that seek to integrate data from multiple networks. Therefore, a comprehensive comparison of different sensors is required to identify the best approach for integrating and homogenizing measurements across sensors. This study compares three commonly used sensors, the Stevens Water Hydra Probe, the Campbell Scientific CS616 TDR, and the CS 229-L heat dissipation sensor, based on data from May 2010 to December 2012 from the Marena, Oklahoma, In Situ Sensor Testbed (MOISST). All sensors are installed at common depths of 5, 10, 20, 50, and 100 cm. The results reveal that the differences between the three sensors tend to increase with depth. CDF plots showed that the CS 229 is the most sensitive to moisture variation under dry conditions and the most easily saturated under wet conditions, followed by the Hydra Probe and the CS616. Our results show that calculating percentiles is a good normalization method for standardizing measurements from different sensors, and our preliminary results demonstrate that CDF matching can be used to convert measurements from one sensor to another.
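    The percentile normalization described can be sketched as nearest-rank CDF matching between two sensors' records. The readings below are synthetic, not MOISST data, and equal-length records are assumed (unequal records would need an interpolated quantile function):

    ```python
    from bisect import bisect_right

    # Sketch of CDF matching: a reading from sensor A is converted to its
    # empirical rank within A's record, then mapped to the value at the
    # same rank in sensor B's record. Readings are synthetic.

    def cdf_match(value, record_a, record_b):
        """Convert a sensor-A reading to the sensor-B scale by matching
        empirical CDF ranks (nearest-rank quantile mapping)."""
        a, b = sorted(record_a), sorted(record_b)
        rank = bisect_right(a, value)            # readings in A <= value
        idx = min(max(rank - 1, 0), len(b) - 1)  # same rank position in B
        return b[idx]

    # Sensor B reads systematically wetter than sensor A by ~0.05 m3/m3.
    record_a = [0.10, 0.12, 0.15, 0.18, 0.22, 0.25, 0.30]
    record_b = [0.15, 0.17, 0.20, 0.23, 0.27, 0.30, 0.35]
    print(cdf_match(0.18, record_a, record_b))  # -> 0.23
    ```

    Because the mapping is defined on ranks rather than raw values, it removes systematic calibration offsets between sensors without assuming either one is "correct", which is the attraction of the approach for merging heterogeneous networks.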

  16. A Dynamic Evaluation Of A Model And An Estimate Of The Air Quality And Regional Climate Impacts Of Enhanced Solar Power Generation

    NASA Astrophysics Data System (ADS)

    Millstein, D.; Brown, N. J.; Zhai, P.; Menon, S.

    2012-12-01

    We use the WRF/Chem model (Weather Research and Forecasting model with chemistry) and pollutant emissions based on the EPA National Emission Inventories from 2005 and 2008 to model regional climate and air quality over the continental United States. Additionally, 2030 emission scenarios are developed to investigate the effects of future enhancements to solar power generation. Modeling covered 6 summer and 6 winter weeks each year. We model feedback between aerosols and meteorology and thus capture direct and indirect aerosol effects. The grid resolution is 25 km and includes no nesting. Between 2005 and 2008 significant emission reductions were reported in the National Emission Inventory. The 2008 weekday emissions over the continental U.S. of SO2 and NO were reduced from 2005 values by 28% and 16%, respectively. Emission reductions of this magnitude are similar in scale to the potential emission reductions from various energy policy initiatives. By evaluating modeled and observed air quality changes from 2005 to 2008, we analyze how well the model represents the effects of historical emission changes. We also gain insight into how well the model might predict the effects of future emission changes. In addition to direct comparisons of model outputs to ground and satellite observations, we compare observed differences between 2005 and 2008 to corresponding modeled differences. Modeling was extended to future scenarios (2030) to simulate air quality and regional climate effects of large-scale adoption of solar power. The 2030-year was selected to allow time for development of solar generation infrastructure. The 2030 emission scenario was scaled, with separate factors for different economic sectors, from the 2008 National Emissions Inventory. 
The changes to emissions caused by the introduction of large-scale solar power (here assumed to be 10% of total energy generation) are based on results from a parallel project that used an electricity grid model applied over multiple regions across the country. The regional climate and air quality effects of future large-scale solar power adoption are analyzed in the context of uncertainty quantified by the dynamic evaluation of the historical (2005 and 2008) WRF/Chem simulations.

  17. FLARE: a New User Facility to Study Multiple-Scale Physics of Magnetic Reconnection Through in-situ Measurements

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Chen, Y.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S. E.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.

    2016-12-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; http://flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton for the study of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram [Ji & Daughton, Physics of Plasmas 18, 111207 (2011)]. Most major components have either already been fabricated or are nearing completion, including the two most crucial magnets, called flux cores. Hardware assembly and installation begin this summer, followed by commissioning in 2017. An initial comprehensive set of research diagnostics will also be constructed and installed in 2017. The main diagnostic is an extensive set of magnetic probe arrays, covering multiple scales from local electron scales (~2 mm), to intermediate ion scales (~10 cm), and global MHD scales (~1 m). The main advantage of this facility for the magnetospheric community is its ability to provide simultaneous in-situ measurements over all of these relevant scales. With these laboratory data, not only are the detailed spatial profiles around each reconnecting X-line available for direct comparison with spacecraft data, but the global conditions and consequences of magnetic reconnection, which are often difficult to quantify in space, can also be controlled or studied systematically. The planned procedures and example topics as a user facility will be discussed in detail.

  18. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  19. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    NASA Astrophysics Data System (ADS)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris-flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Our new model also relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. 
The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher resolution grids evolve with the flow. The reduction in computational cost, due to AMR, makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m^3) debris flow that originated on Mt. Meager, British Columbia in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.

  20. An improved model for whole genome phylogenetic analysis by Fourier transform.

    PubMed

    Yin, Changchuan; Yau, Stephen S-T

    2015-10-07

    DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignment (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes. It is also limited by its computational complexity when comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information content of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor; the 2D mapping reduces nucleotide composition bias in the distance measure and thus improves the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences with different lengths, we propose an improved even scaling algorithm that extends shorter DFT power spectra to the longest length of the underlying sequences. After the DFT power spectra are evenly scaled, the spectra have the same dimensionality in the Fourier frequency space, and the Euclidean distances of the full Fourier power spectra of the DNA sequences are used as the dissimilarity metrics. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences across different length ranges. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences, including simulated and real datasets. The method yields accurate and reliable phylogenetic trees and demonstrates that the improved DFT dissimilarity measure is an efficient and effective similarity measure for DNA sequences. Due to its high efficiency and accuracy, the proposed DFT similarity measure is successfully applied to phylogenetic analysis of individual genes and large whole bacterial genomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
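    The pipeline described in record 20 (2D numeric mapping, DFT power spectrum, even scaling, Euclidean distance) can be sketched in a few lines. This is an illustrative sketch only: the 2D mapping (purine/pyrimidine and amino/keto axes) and the linear-interpolation scaling used here are plausible stand-ins, not necessarily the paper's exact formulations.

```python
import cmath
import math

def encode_2d(seq):
    # Illustrative 2D mapping: purine/pyrimidine and amino/keto axes.
    x = {'A': 1, 'G': 1, 'C': -1, 'T': -1}
    y = {'A': 1, 'C': 1, 'G': -1, 'T': -1}
    return [x[b] for b in seq], [y[b] for b in seq]

def power_spectrum(values):
    # Direct O(n^2) DFT; an FFT would be used for real genomes.
    n = len(values)
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * j / n)
                    for j, v in enumerate(values))) ** 2
            for k in range(n)]

def even_scale(spec, m):
    # Stretch a length-n spectrum to length m by linear interpolation,
    # a simple stand-in for the paper's even scaling algorithm.
    n = len(spec)
    out = []
    for k in range(m):
        t = k * (n - 1) / (m - 1) if m > 1 else 0.0
        i = min(int(t), n - 2)
        frac = t - i
        out.append(spec[i] * (1 - frac) + spec[i + 1] * frac)
    return out

def dft_distance(seq1, seq2):
    # Euclidean distance between evenly scaled full power spectra.
    m = max(len(seq1), len(seq2))
    total = 0.0
    for axis in (0, 1):  # both coordinates of the 2D mapping
        s1 = even_scale(power_spectrum(encode_2d(seq1)[axis]), m)
        s2 = even_scale(power_spectrum(encode_2d(seq2)[axis]), m)
        total += sum((a - b) ** 2 for a, b in zip(s1, s2))
    return math.sqrt(total)

print(dft_distance("ACGTACGT", "ACGTACGT"))  # 0.0
```

    Because only spectra are compared, no alignment is needed, and the distance is insensitive to rearrangements that preserve overall periodic structure.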

  1. Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.

    PubMed

    Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees

    2016-02-01

    As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.

  2. Improved simulation of group averaged CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS

    NASA Astrophysics Data System (ADS)

    Chen, Z. H.; Zhu, J.; Zeng, N.

    2013-01-01

    CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal is compatibility: observations at a single site reflect all underlying processes of various scales, which usually cannot be fully resolved by model simulations at the grid points nearest the site due to lack of spatial or temporal resolution or missing processes in models. In this article, we group site observations from multiple stations according to atmospheric mixing regimes and surface characteristics. The group-averaged values of CO2 concentration from model simulations and observations are used to evaluate the regional model results. Using group-averaged measurements of CO2 reduces the noise of individual stations. The difference in group-averaged values between observations and model results reflects the uncertainties of the large-scale flux in the region containing the grouped stations. We compared group-averaged values from observations with model results driven by two biospheric fluxes, from the Carnegie-Ames-Stanford-Approach (CASA) and VEgetation-Global-Atmosphere-Soil (VEGAS) models, to evaluate the regional model results. Results show that modeled group-averaged CO2 concentrations with fluxes from VEGAS improve significantly for most regions. Large differences between both model results and observations remain for group-averaged values in the North Atlantic, Indian Ocean, and South Pacific Tropics, implying possibly large uncertainties in the fluxes there.

  3. Conjugate-Gradient Algorithms For Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1993-01-01

    Algorithms for serial and parallel computation of the forward dynamics of multiple-link robotic manipulators by the conjugate-gradient method are developed. The parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors could be used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
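    Forward dynamics reduces to solving a symmetric positive-definite linear system (the joint-space mass matrix times accelerations equals applied torques minus bias forces). The core conjugate-gradient iteration that record 3 builds on can be sketched generically; the 2x2 system below is a toy example, not data from the paper.

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Solve A x = b for a symmetric positive-definite matrix A
    (given as nested lists) by the conjugate-gradient iteration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x (x = 0 initially)
    p = r[:]                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Toy joint-space system: M qdd = tau, with M symmetric positive-definite.
M = [[4.0, 1.0], [1.0, 3.0]]
tau = [1.0, 2.0]
qdd = conjugate_gradient(M, tau)
```

    Each iteration needs only matrix-vector products and dot products, which is what makes the method attractive for parallel, VLSI-style processors.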

  4. Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species

    Treesearch

    Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin

    1999-01-01

    The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...

  5. Quantum probability, choice in large worlds, and the statistical structure of reality.

    PubMed

    Ross, Don; Ladyman, James

    2013-06-01

    Classical probability models of incentive response are inadequate in "large worlds," where the dimensions of relative risk and the dimensions of similarity in outcome comparisons typically differ. Quantum probability models for choice in large worlds may be motivated pragmatically - there is no third theory - or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.

  6. Dispersion in Fractures with Ramified Dissolution Patterns

    NASA Astrophysics Data System (ADS)

    Xu, Le; Marks, Benjy; Toussaint, Renaud; Flekkøy, Eirik G.; Måløy, Knut J.

    2018-04-01

    The injection of a reactive fluid into an open fracture may modify the fracture surface locally and create a ramified structure around the injection point. This structure will have a significant impact on the dispersion of the injected fluid due to increased permeability, which will introduce large velocity fluctuations into the fluid. Here, we have injected a fluorescent tracer fluid into a transparent artificial fracture with such a ramified structure. The transparency of the model makes it possible to follow the detailed dispersion of the tracer concentration. The experiments have been compared to two-dimensional (2D) computer simulations which include both convective motion and molecular diffusion. A comparison was also performed between the dispersion from an initially ramified dissolution structure and the dispersion from an initially circular region. A significant difference was seen at both small and large length scales. At large length scales, the persistence of the anisotropy of the concentration distribution far from the ramified structure is discussed with reference to some theoretical considerations and comparison with simulations.

  7. Adapting an Ant Colony Metaphor for Multi-Robot Chemical Plume Tracing

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming

    2012-01-01

    We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, CPT can be considered a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments. PMID:22666056

  8. Adapting an ant colony metaphor for multi-robot chemical plume tracing.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming

    2012-01-01

    We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, CPT can be considered a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimensional continuous search area is discretized into grids and the virtual pheromone is updated according to both gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor naturally ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments.
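    The heart of the ACO adaptation in records 7-8 is a grid of virtual pheromone that evaporates over time and is reinforced where gas is detected. The minimal update below shows only the evaporation-plus-deposit core; the papers additionally fold wind information into the update, which is omitted here, and the rate constants are illustrative.

```python
def update_pheromone(tau, conc, rho=0.1, q=1.0):
    """One evaporate-and-deposit step on a discretized search grid.

    tau  : current pheromone grid (list of rows)
    conc : measured gas concentration per grid cell
    rho  : evaporation rate; q : deposit gain (both illustrative values)
    """
    return [[(1 - rho) * t + q * c for t, c in zip(trow, crow)]
            for trow, crow in zip(tau, conc)]

# One step on a 1x1 grid: 0.9 * 1.0 + 1.0 * 2.0 = 2.9
grid = update_pheromone([[1.0]], [[2.0]])
```

    Robots then bias their moves toward high-pheromone cells, so repeated detections accumulate into a gradient pointing toward the source while stale trails fade.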

  9. Increasing morphological complexity in multiple parallel lineages of the Crustacea

    PubMed Central

    Adamowicz, Sarah J.; Purvis, Andy; Wills, Matthew A.

    2008-01-01

    The prospect of finding macroevolutionary trends and rules in the history of life is tremendously appealing, but very few pervasive trends have been found. Here, we demonstrate a parallel increase in the morphological complexity of most of the deep lineages within a major clade. We focus on the Crustacea, measuring the morphological differentiation of limbs. First, we show a clear trend of increasing complexity among 66 free-living, ordinal-level taxa from the Phanerozoic fossil record. We next demonstrate that this trend is pervasive, occurring in 10 or 11 of 12 matched-pair comparisons (across five morphological diversity indices) between extinct Paleozoic and related Recent taxa. This clearly differentiates the pattern from the effects of lineage sorting. Furthermore, newly appearing taxa tend to have had more types of limbs and a higher degree of limb differentiation than the contemporaneous average, whereas those going extinct showed higher-than-average limb redundancy. Patterns of contemporary species diversity partially reflect the paleontological trend. These results provide a rare demonstration of a large-scale and probably driven trend occurring across multiple independent lineages and influencing both the form and number of species through deep time and in the present day. PMID:18347335

  10. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  11. How many flux towers are enough? How tall is a tower tall enough? How elaborate a scaling is scaling enough?

    NASA Astrophysics Data System (ADS)

    Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.

    2017-12-01

    Most eddy covariance (EC) flux towers suffer from footprint bias. This footprint not only varies rapidly in time, but is smaller than the resolution of most earth system models, leading to a systemic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a large flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties. 
These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus experimental expense, required for domain-scaling over heterogeneous surface.

  12. Providing context: antimicrobial resistance from multiple environmental sources

    USDA-ARS?s Scientific Manuscript database

    Background: Animal agriculture has been identified as encouraging the spread of resistance due to the use of large quantities of antimicrobials for animal production purposes. When antimicrobial resistance (AMR) is reported in agricultural settings without comparison to other environments there is a...

  13. Multi-Timescale Analysis of the Spatial Representativeness of In Situ Soil Moisture Data within Satellite Footprints

    NASA Astrophysics Data System (ADS)

    Molero, B.; Leroux, D. J.; Richaume, P.; Kerr, Y. H.; Merlin, O.; Cosh, M. H.; Bindlish, R.

    2018-01-01

    We conduct a novel comprehensive investigation that seeks to prove the connection between spatial scales and timescales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at the Yanco and Little Washita in situ networks are first decomposed into anomalies at timescales ranging from 0.5 to 128 days, using wavelet transforms. Then, their degree of spatial representativeness is evaluated on a per-timescale basis by comparison to large spatial scale data sets (the in situ spatial average, SMOS, AMSR2, and ECMWF). Four methods are used for this: temporal stability analysis (TStab), triple collocation (TC), percentage of correlated areas (CArea), and a new proposed approach that uses wavelet-based correlations (WCor). We found that the mean of the spatial representativeness values tends to increase with the timescale, but so does their dispersion. Locations exhibit poor spatial representativeness at scales below 4 days, and either very good or poor representativeness at seasonal scales. Regarding the methods, TStab cannot be applied to the anomaly series due to their multiple zero-crossings, and TC is suitable for week and month scales but not for other scales, where data set cross-correlations are found to be low. In contrast, WCor and CArea give consistent results at all timescales. WCor is less sensitive to the spatial sampling density, so it is a robust method that can be applied to sparse networks (one station per footprint). These results are promising for improving the validation and downscaling of satellite SM series and the optimization of SM networks.
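    The per-timescale comparison in record 13 amounts to: band-pass each series into anomalies at a given timescale, then correlate the station anomaly with the large-scale anomaly at that same timescale. The sketch below substitutes a crude difference-of-moving-averages band-pass for the paper's wavelet transform, purely to illustrate the idea; window lengths and the synthetic series are invented for the example.

```python
import math

def moving_average(x, w):
    # Centered moving average with truncated edges.
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def scale_anomaly(x, w_short, w_long):
    # Band-pass anomaly between two averaging windows: a crude
    # stand-in for one timescale of a wavelet decomposition.
    short = moving_average(x, w_short)
    long_ = moving_average(x, w_long)
    return [s - l for s, l in zip(short, long_)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Synthetic example: station = shared slow signal + local fast noise;
# "network" plays the role of the large-scale (footprint-average) series.
station = [math.sin(2 * math.pi * t / 16) + 0.3 * math.sin(2 * math.pi * t / 4)
           for t in range(128)]
network = [math.sin(2 * math.pi * t / 16) for t in range(128)]

r_fast = pearson(scale_anomaly(station, 1, 5), scale_anomaly(network, 1, 5))
r_slow = pearson(scale_anomaly(station, 5, 21), scale_anomaly(network, 5, 21))
```

    In this toy case the station is highly representative at the slow timescale (high r_slow) but not at the fast one, mirroring the paper's finding that representativeness tends to grow with timescale.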

  14. Chemical Processing of Electrons and Holes.

    ERIC Educational Resources Information Center

    Anderson, Timothy J.

    1990-01-01

    Presents a synopsis of four lectures given in an elective senior-level electronic material processing course to introduce solid state electronics. Provides comparisons of a large scale chemical processing plant and an integrated circuit. (YP)

  15. A comparison of three methods for measuring local urban tree canopy cover

    Treesearch

    Kristen L. King; Dexter H. Locke

    2013-01-01

    Measurements of urban tree canopy cover are crucial for managing urban forests and required for the quantification of the benefits provided by trees. These types of data are increasingly used to secure funding and justify large-scale planting programs in urban areas. Comparisons of tree canopy measurement methods have been conducted before, but a rapidly evolving set...

  16. The coherence length of the peculiar velocity field in the universe and the large-scale galaxy correlation data

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    This study presents a method for obtaining the true rms peculiar flow in the universe on scales up to 100-120/h Mpc using APM data as an input assuming only that peculiar motions are caused by peculiar gravity. The comparison to the local (Great Attractor) flow is expected to give clear information on the density parameter, Omega, and the local bias parameter, b. The observed peculiar flows in the Great Attractor region are found to be in better agreement with the open (Omega = 0.1) universe in which light traces mass (b = 1) than with a flat (Omega = 1) universe unless the bias parameter is unrealistically large (b is not less than 4). Constraints on Omega from a comparison of the APM and PV samples are discussed.

  17. Non-linear scale interactions in a forced turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2015-11-01

    A strong phase-organizing influence exerted by a single synthetic large-scale spatio-temporal mode on directly coupled (through triadic interactions) small scales in a turbulent boundary layer, forced by a spatially impulsive dynamic wall-roughness patch, was previously demonstrated by the authors (J. Fluid Mech. 2015, vol. 767, R4). The experimental set-up was later enhanced to allow for simultaneous forcing of multiple scales in the flow. Results and analysis are presented from a new set of experiments in which two distinct large scales are forced in the flow by a dynamic wall-roughness patch. The internal non-linear forcing of two other scales with triadic consistency to the artificially forced large scales, corresponding to the sum and difference in wavenumbers, is dominated by the latter. This allows for a forcing-response (input-output) type analysis of the two triadic scales, and naturally lends itself to a resolvent operator based model (e.g. McKeon & Sharma, J. Fluid Mech. 2010, vol. 658, pp. 336-382) of the governing Navier-Stokes equations. The support of AFOSR (grant #FA 9550-12-1-0469, program manager D. Smith) is gratefully acknowledged.

  18. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications.

    PubMed

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-05-12

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array.
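    The reason a nested array yields O(N²) DOF from O(N) sensors is that its difference coarray (the set of all pairwise sensor-position differences) is hole-free and much longer than the physical array. The sketch below uses the classic two-level nested geometry of Pal & Vaidyanathan; whether this matches the paper's exact layout is an assumption.

```python
def nested_array_positions(n1, n2):
    # Two-level nested array: dense level at 1..n1 (unit spacing) and
    # sparse level at m*(n1+1) for m = 1..n2 (positions in units of d).
    return list(range(1, n1 + 1)) + [m * (n1 + 1) for m in range(1, n2 + 1)]

def difference_coarray(positions):
    # Distinct lags p - q; consecutive (hole-free) lags give the DOF.
    return sorted({p - q for p in positions for q in positions})

pos = nested_array_positions(3, 3)   # 6 physical sensors: [1, 2, 3, 4, 8, 12]
lags = difference_coarray(pos)       # 2*n2*(n1+1) - 1 = 23 consecutive lags
```

    With 6 physical sensors the coarray spans every lag from -11 to 11, so a virtual 23-element uniform array is available for direction-of-arrival estimation.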

  19. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications

    PubMed Central

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-01-01

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array. PMID:28498329

  20. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    Gene set enrichment analysis (GSEA) is an important approach to the analysis of coordinate expression changes at a pathway level. Although many statistical and computational methods have been proposed for GSEA, the issue of a concordant integrative GSEA of multiple expression data sets has not been well addressed. Among different related data sets collected for the same or similar study purposes, it is important to identify pathways or gene sets with concordant enrichment. We categorize the underlying true states of differential expression into three representative categories: no change, positive change and negative change. Due to data noise, what we observe from experiments may not indicate the underlying truth. Although these categories are not observed in practice, they can be considered in a mixture model framework. Then, we define the mathematical concept of concordant gene set enrichment and calculate its related probability based on a three-component multivariate normal mixture model. The related false discovery rate can be calculated and used to rank different gene sets. We used three published lung cancer microarray gene expression data sets to illustrate our proposed method. One analysis based on the first two data sets was conducted to compare our result with a previously published result based on a GSEA conducted separately for each individual data set. This comparison illustrates the advantage of our proposed concordant integrative gene set enrichment analysis. Then, with a relatively new and larger pathway collection, we used our method to conduct an integrative analysis of the first two data sets and also all three data sets. Both results showed that many gene sets could be identified with low false discovery rates. A consistency between both results was also observed. A further exploration based on the KEGG cancer pathway collection showed that a majority of these pathways could be identified by our proposed method. This study illustrates that we can improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.
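
    The mixture idea can be made concrete with a hedged sketch. The three-component mixture below uses hypothetical priors and component means (not the fitted values from the study) and computes the posterior probability that two data sets share the same non-null state for a gene set, given one summary z-value per data set:

```python
import math

def npdf(x, mu, sd=1.0):
    # standard normal density, used as each mixture component's likelihood
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Hypothetical (prior, mean) pairs for the three states:
# negative change, no change, positive change. Illustrative values only.
COMPONENTS = [(0.2, -2.0), (0.6, 0.0), (0.2, 2.0)]

def concordance_probability(z1, z2):
    """Posterior probability that both data sets are in the same
    non-null state, assuming independence given the states."""
    joint, total = {}, 0.0
    for i, (p1, m1) in enumerate(COMPONENTS):
        for j, (p2, m2) in enumerate(COMPONENTS):
            w = p1 * p2 * npdf(z1, m1) * npdf(z2, m2)
            joint[(i, j)] = w
            total += w
    # concordant enrichment: both negative (0,0) or both positive (2,2)
    return (joint[(0, 0)] + joint[(2, 2)]) / total
```

    Ranking gene sets by one minus this probability gives the kind of false-discovery-rate ordering described above; two strongly positive z-values score as concordant, while opposite-signed values do not.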

  1. Moving to stay in place: behavioral mechanisms for coexistence of African large carnivores.

    PubMed

    Vanak, Abi Tamim; Fortin, Daniel; Thaker, Maria; Ogden, Monika; Owen, Cailey; Greatwood, Sophie; Slotow, Rob

    2013-11-01

    Most ecosystems have multiple predator species that not only compete for shared prey, but also pose direct threats to each other. These intraguild interactions are key drivers of carnivore community structure, with ecosystem-wide cascading effects. Yet, behavioral mechanisms for coexistence of multiple carnivore species remain poorly understood. The challenges of studying large, free-ranging carnivores have resulted in mainly coarse-scale examination of behavioral strategies without information about all interacting competitors. We overcame some of these challenges by examining the concurrent fine-scale movement decisions of almost all individuals of four large mammalian carnivore species in a closed terrestrial system. We found that the intensity of intraguild interactions did not follow a simple hierarchical allometric pattern, because spatial and behavioral tactics of subordinate species changed with threat and resource levels across seasons. Lions (Panthera leo) were generally unrestricted and anchored themselves in areas rich in not only their principal prey, but also, during periods of resource limitation (dry season), rich in the main prey for other carnivores. Because of this, the greatest cost (potential intraguild predation) for subordinate carnivores was spatially coupled with the highest potential benefit of resource acquisition (prey-rich areas), especially in the dry season. Leopard (P. pardus) and cheetah (Acinonyx jubatus) overlapped with the home range of lions but minimized their risk using fine-scaled avoidance behaviors and restricted resource acquisition tactics. The cost of intraguild competition was most apparent for cheetahs, especially during the wet season, as areas with energetically rewarding large prey (wildebeest) were avoided when they overlapped highly with the activity areas of lions. Contrary to expectation, the smallest species (African wild dog, Lycaon pictus) did not avoid only lions, but also used multiple tactics to minimize encountering all other competitors. Intraguild competition thus forced wild dogs into areas with the lowest resource availability year round. Coexistence of multiple carnivore species has typically been explained by dietary niche separation, but our multi-scaled movement results suggest that differences in resource acquisition may instead be a consequence of avoiding intraguild competition. We generate a more realistic representation of hierarchical behavioral interactions that may ultimately drive spatially explicit trophic structures of multi-predator communities.

  2. Stability of large-scale systems with stable and unstable subsystems.

    NASA Technical Reports Server (NTRS)

    Grujic, Lj. T.; Siljak, D. D.

    1972-01-01

    The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
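
    In this line of work the "simple algebraic criterion" is typically an M-matrix test on an aggregate comparison matrix, whose diagonal entries carry the subsystem stability margins and whose off-diagonal entries carry (negated) interconnection bounds. The sketch below is a generic illustration of such a test with made-up numbers, not the paper's exact conditions:

```python
# Hedged illustration of an M-matrix check: all off-diagonal entries
# non-positive and all leading principal minors positive. Satisfying it
# is the usual algebraic route to composite-system asymptotic stability
# in vector-Lyapunov analyses.
def det(m):
    # determinant by Laplace expansion along the first row (fine for small n)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_m_matrix(a):
    n = len(a)
    if any(a[i][j] > 0 for i in range(n) for j in range(n) if i != j):
        return False                      # off-diagonals must be non-positive
    return all(det([row[:k] for row in a[:k]]) > 0 for k in range(1, n + 1))

# Illustrative aggregate matrix: diagonal = subsystem margins,
# off-diagonal = negated bounds on interconnection strengths.
A = [[2.0, -0.5],
     [-0.5, 1.0]]
```

    Here `is_m_matrix(A)` holds, so the illustrative two-subsystem composite passes the test; weakening the diagonal margins below the interconnection bounds makes the second minor negative and the test fails.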

  3. The Use of Weighted Graphs for Large-Scale Genome Analysis

    PubMed Central

    Zhou, Fang; Toivonen, Hannu; King, Ross D.

    2014-01-01

    There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
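
    The weighted-graph idea can be sketched with a toy example. The weighting below (edge weight = fraction of genomes whose metabolic network contains both enzymes) is a hedged stand-in for the taxonomic graph; the paper's exact weighting schemes and data structures may differ:

```python
from collections import Counter
from itertools import combinations

# Toy genome -> enzyme-set data; EC labels and contents are illustrative.
genomes = {
    "g1": {"EC1", "EC2", "EC3"},
    "g2": {"EC1", "EC2"},
    "g3": {"EC2", "EC3"},
}

def weighted_enzyme_graph(genomes):
    # count co-occurrence of each enzyme pair across genomes, then
    # normalize by the number of genomes to get an edge weight in [0, 1]
    pair_counts = Counter()
    for enzymes in genomes.values():
        for pair in combinations(sorted(enzymes), 2):
            pair_counts[pair] += 1
    n = len(genomes)
    return {pair: count / n for pair, count in pair_counts.items()}

graph = weighted_enzyme_graph(genomes)
```

    Because each genome contributes only its own enzyme pairs, the graph is built in a single pass over the collection rather than by all-against-all genome comparisons, which is what lets this style of summary scale to thousands of genomes.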

  4. Comparison of Habitat-Specific Nutrient Removal and Release in Pacific NW Salt Marshes at Multiple Spatial Scales - CERF

    EPA Science Inventory

    Wetlands can be sources, sinks and transformers of nutrients, although it is their role in nutrient removal that is valued as a water purification ecosystem service. In order to quantify that service for any wetland, it is important to understand the drivers of nutrient removal w...

  5. Multiple-scale roost habitat comparisons of female Merriam's wild turkeys in the southern Black Hills, South Dakota

    Treesearch

    Daniel J. Thompson; Mark A. Rumble; Lester D. Flake; Chad P. Lehman

    2009-01-01

    Because quantity and quality of roosting habitat can affect Merriam's Wild Turkey (Meleagris gallopavo merriami) distribution, we described habitat characteristics of Merriam's turkey roost sites in the southern Black Hills of South Dakota. Varying proportions of Merriam's turkeys in the southern Black Hills depended on supplemental feed from livestock...

  6. Effectiveness of team-based learning methodology in teaching transfusion medicine to medical undergraduates in third semester: A comparative study.

    PubMed

    Doshi, Neena Piyush

    2017-01-01

    Team-based learning (TBL) combines small and large group learning by incorporating multiple small groups in a large group setting. It is a teacher-directed method that encourages student-student interaction. This study compares student learning and teaching satisfaction between conventional lecture and TBL in the subject of pathology. The present study aims to assess the effectiveness of the TBL method of teaching over the conventional lecture. The present study was conducted in the Department of Pathology, GMERS Medical College and General Hospital, Gotri, Vadodara, Gujarat. The study population comprised 126 students of second-year MBBS, in their third semester of the academic year 2015-2016. "Hemodynamic disorders" were taught by the conventional method and "transfusion medicine" by the TBL method. Effectiveness of both methods was assessed. A posttest with multiple choice questions was conducted at the end of "hemodynamic disorders." Assessment of TBL was based on individual score, team score, and each member's contribution to the success of the team. The individual score and overall score were compared with the posttest score on "hemodynamic disorders." A feedback was taken from the students regarding their experience with TBL. Tukey's multiple comparisons test and ANOVA summary were used to find the significance of scores between didactic and TBL methods. Student feedback was taken using a "Student Satisfaction Scale" based on the Likert scoring method. The mean of student scores by didactic, Individual Readiness Assurance Test (score "A"), and overall (score "D") was 49.8% (standard deviation [SD]-14.8), 65.6% (SD-10.9), and 65.6% (SD-13.8), respectively. The study showed positive educational outcome in terms of knowledge acquisition, participation and engagement, and team performance with TBL.

  7. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    NASA Astrophysics Data System (ADS)

    Abd-El-Barr, Mostafa

    2010-12-01

    The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near) minimal sum-of-products (two-level) realisation of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near) optimal solutions. The ACO algorithm mimics the behaviour of real-world ants in using the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The results obtained using the proposed ACO-DC technique are shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.
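
    The constructive-greediness mechanism at the heart of ACO can be sketched generically. The two functions below implement the standard pheromone-weighted step choice and the evaporate-then-deposit update; the parameter values are illustrative and not taken from the article:

```python
import random

random.seed(1)

# Hedged sketch of generic ACO rules: an ant picks its next construction
# step with probability proportional to pheromone^alpha * heuristic^beta,
# then pheromone evaporates and the chosen step is reinforced.
def choose_step(options, pheromone, heuristic, alpha=1.0, beta=2.0):
    weights = [pheromone[o] ** alpha * heuristic[o] ** beta for o in options]
    r = random.random() * sum(weights)
    acc = 0.0
    for o, w in zip(options, weights):
        acc += w
        if r <= acc:
            return o
    return options[-1]

def evaporate_and_deposit(pheromone, chosen, rho=0.1, deposit=1.0):
    for o in pheromone:
        pheromone[o] *= (1.0 - rho)   # evaporation on every edge
    pheromone[chosen] += deposit      # reinforce the step the ant took
```

    In an ACO-DC-style hybrid, `choose_step` would decide how an ant decomposes the function across levels, and reinforcement would favour decompositions whose DC-synthesised sub-functions used fewer gates.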

  8. Similarity spectra analysis of high-performance jet aircraft noise.

    PubMed

    Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M

    2013-04-01

    Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.

  9. New Statistical Model for Variability of Aerosol Optical Thickness: Theory and Application to MODIS Data over Ocean

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian

    2016-01-01

    A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponential of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
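
    The core construction (AOT as the exponential of a correlated Gaussian process, summarized by a structure function) can be sketched numerically. An AR(1) series stands in here for a Gaussian process with exponential autocorrelation; the parameters are illustrative, not the paper's fitted values:

```python
import math
import random

random.seed(0)

# Hedged sketch: AOT = exp(g), where g is a stationary Gaussian process.
def gaussian_series(n, rho=0.95, sigma=0.4):
    # AR(1) with stationary standard deviation sigma and lag-1 correlation rho
    g = [random.gauss(0.0, sigma)]
    for _ in range(n - 1):
        g.append(rho * g[-1] + random.gauss(0.0, sigma * math.sqrt(1.0 - rho ** 2)))
    return g

def structure_function(series, lag):
    # D(lag) = mean squared increment; it grows with lag and saturates
    # toward twice the variance once samples decorrelate
    diffs = [(series[i + lag] - series[i]) ** 2 for i in range(len(series) - lag)]
    return sum(diffs) / len(diffs)

aot = [math.exp(g) for g in gaussian_series(20000)]   # lognormal AOT samples
```

    By construction every sample is positive and the one-point distribution is lognormal, while the saturation of `structure_function` at large lags is the non-fractal asymptotic behavior the abstract highlights.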

  10. MEAN-FIELD MODELING OF AN α{sup 2} DYNAMO COUPLED WITH DIRECT NUMERICAL SIMULATIONS OF RIGIDLY ROTATING CONVECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp

    2014-10-10

    The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α{sup 2} dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.

  11. The Social Life of a Data Base

    NASA Technical Reports Server (NTRS)

    Linde, Charlotte; Wales, Roxana; Clancy, Dan (Technical Monitor)

    2002-01-01

    This paper presents the complex social life of a large data base. The topics include: 1) Social Construction of Mechanisms of Memory; 2) Data Bases: The Invisible Memory Mechanism; 3) The Human in the Machine; 4) Data of the Study: A Large-Scale Problem Reporting Data Base; 5) The PRACA Study; 6) Description of PRACA; 7) PRACA and Paper; 8) Multiple Uses of PRACA; 9) The Work of PRACA; 10) Multiple Forms of Invisibility; 11) Such Systems are Everywhere; and 12) Two Morals to the Story. This paper is in viewgraph form.

  12. MRMPROBS: a data assessment and metabolite identification tool for large-scale multiple reaction monitoring based widely targeted metabolomics.

    PubMed

    Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro

    2013-05-21

    We developed a new software program, MRMPROBS, for widely targeted metabolomics by using the large-scale multiple reaction monitoring (MRM) mode. The strategy became increasingly popular for the simultaneous analysis of up to several hundred metabolites at high sensitivity, selectivity, and quantitative capability. However, the traditional method of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but is often subjective and makeshift work. Our program overcomes these problems by detecting and identifying metabolites automatically, by separating isomeric metabolites, and by removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. Our software program also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. For a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data available to any instrument or any experimental condition.
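
    The probabilistic scoring idea (an odds ratio derived from a multivariate logistic regression over peak-quality metrics) can be sketched as follows. The feature names and weights here are hypothetical placeholders; MRMPROBS fits its actual coefficients from training data:

```python
import math

# Hedged sketch of a logistic-regression match score for an MRM peak group.
# WEIGHTS and BIAS are illustrative, not MRMPROBS's fitted coefficients.
WEIGHTS = {"rt_similarity": 3.0, "peak_shape": 2.0, "ion_ratio": 2.5}
BIAS = -4.0

def match_probability(features):
    # standard logistic model: p = sigmoid(bias + sum of weight * feature)
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    # the odds ratio p / (1 - p) used as a ranking score
    return p / (1.0 - p)
```

    A candidate peak with high retention-time similarity, good peak shape, and a consistent ion ratio scores near 1, while background noise scores near 0, which is what allows automatic identification and noise removal with a single probabilistic cutoff.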

  13. A linear concatenation strategy to construct 5'-enriched amplified cDNA libraries using multiple displacement amplification.

    PubMed

    Gadkar, Vijay J; Filion, Martin

    2013-06-01

    In various experimental systems, limiting available amounts of RNA may prevent a researcher from performing large-scale analyses of gene transcripts. One way to circumvent this is to 'pre-amplify' the starting RNA/cDNA, so that sufficient amounts are available for any downstream analysis. In the present study, we report the development of a novel protocol for constructing amplified cDNA libraries using the Phi29 DNA polymerase based multiple displacement amplification (MDA) system. Using as little as 200 ng of total RNA, we developed a linear concatenation strategy to make the single-stranded cDNA template amenable for MDA. The concatenation, made possible by the template switching property of the reverse transcriptase enzyme, resulted in the amplified cDNA library with intact 5' ends. MDA generated micrograms of template, allowing large-scale polymerase chain reaction analyses or other large-scale downstream applications. As the amplified cDNA library contains intact 5' ends, it is also compatible with 5' RACE analyses of specific gene transcripts. Empirical validation of this protocol is demonstrated on a highly characterized (tomato) and an uncharacterized (corn gromwell) experimental system.

  14. Examination of Cross-Scale Coupling During Auroral Events using RENU2 and ISINGLASS Sounding Rocket Data.

    NASA Astrophysics Data System (ADS)

    Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.

    2017-12-01

    The RENU2 sounding rocket (launched from Andoya rocket range on December 13th, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine scale structures within aurorae. During RENU2, small (sub-km) scale drivers persist for long (10s of minutes) time scales and result in large scale ionospheric (thermal electron) and thermospheric response (neutral upwelling). ISINGLASS observations show small scale drivers, but with short (minute) time scales, with ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric response to small scale drivers over different integration times.

  15. Differences in intermittent and continuous fecal shedding patterns between natural and experimental Mycobacterium avium subspecies paratuberculosis infections in cattle

    USDA-ARS?s Scientific Manuscript database

    The objective of this paper is to study shedding patterns of cows infected with Mycobacterium avium subsp. paratuberculosis (MAP). While multiple single farm studies of MAP dynamics were reported, there is no large-scale meta-analysis of both natural and experimental infections. Large difference...

  16. Interactive comparison and remediation of collections of macromolecular structures.

    PubMed

    Moriarty, Nigel W; Liebschner, Dorothee; Klei, Herbert E; Echols, Nathaniel; Afonine, Pavel V; Headd, Jeffrey J; Poon, Billy K; Adams, Paul D

    2018-01-01

    Often similar structures need to be compared to reveal local differences throughout the entire model or between related copies within the model. Therefore, a program to compare multiple structures and enable correction of any differences not supported by the density map was written within the Phenix framework (Adams et al., Acta Cryst 2010; D66:213-221). This program, called Structure Comparison, can also be used for structures with multiple copies of the same protein chain in the asymmetric unit, that is, as a result of non-crystallographic symmetry (NCS). Structure Comparison was designed to interface with Coot (Emsley et al., Acta Cryst 2010; D66:486-501) and PyMOL (DeLano, PyMOL 0.99; 2002) to facilitate comparison of large numbers of related structures. Structure Comparison analyzes collections of protein structures using several metrics, such as the rotamer conformation of equivalent residues, displays the results in tabular form and allows superimposed protein chains and density maps to be quickly inspected and edited (via the tools in Coot) for consistency, completeness and correctness. © 2017 The Protein Society.

  17. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.

  18. Large-scale environments of narrow-line Seyfert 1 galaxies

    NASA Astrophysics Data System (ADS)

    Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.

    2017-09-01

    Studying large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources is clearly different compared to BLS1 galaxies, thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.

  19. Large Scale Winter Time Disturbances in Meteor Winds over Central and Eastern Europe

    NASA Technical Reports Server (NTRS)

    Greisiger, K. M.; Portnyagin, Y. I.; Lysenko, I. A.

    1984-01-01

    Daily zonal wind data of the four pre-MAP winters 1978/79 to 1981/82 obtained over Central Europe and Eastern Europe by the radar meteor method were studied. Available temperature and satellite radiance data of the middle and upper stratosphere were used for comparison, as well as wind data from Canada. The existence or nonexistence of coupling between the observed large-scale zonal wind disturbances in the upper mesopause region (90 to 100 km) and corresponding events in the stratosphere is discussed.

  20. A climate-change adaptation framework to reduce continental-scale vulnerability across conservation reserves

    Treesearch

    D.R. Magness; J.M. Morton; F. Huettmann; F.S. Chapin; A.D. McGuire

    2011-01-01

    Rapid climate change, in conjunction with other anthropogenic drivers, has the potential to cause mass species extinction. To minimize this risk, conservation reserves need to be coordinated at multiple spatial scales because the climate envelopes of many species may shift rapidly across large geographic areas. In addition, novel species assemblages and ecological...

  1. Impact of Accumulated Error on Item Response Theory Pre-Equating with Mixed Format Tests

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F.

    2016-01-01

    The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…

  2. A multi-scale assessment of population connectivity in African lions (Panthera leo) in response to landscape change

    Treesearch

    Samuel A. Cushman; Nicholas B. Elliot; David W. Macdonald; Andrew J. Loveridge

    2015-01-01

    Habitat loss and fragmentation are among the major drivers of population declines and extinction, particularly in large carnivores. Connectivity models provide practical tools for assessing fragmentation effects and developing mitigation or conservation responses. To be useful to conservation practitioners, connectivity models need to incorporate multiple scales and...

  3. Managing landscapes at multiple scales for sustainability of ecosystem functions (Preface)

    Treesearch

    R.A. Birdsey; R. Lucas; Y. Pan; G. Sun; E.J. Gustafson; A.H.  Perera

    2010-01-01

    The science of landscape ecology is a rapidly evolving academic field with an emphasis on studying large-scale spatial heterogeneity created by natural influences and human activities. These advances have important implications for managing and conserving natural resources. At a September 2008 IUFRO conference in Chengdu, Sichuan, P.R. China, we highlighted both the...

  4. Nature of global large-scale sea level variability in relation to atmospheric forcing: A modeling study

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng

    1998-03-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of sea level's temporal variability from periods of days to a year is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and high latitudes are mainly wind driven. Wind-driven variability itself exhibits a strong latitudinal dependence. Wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variabilities associated with equatorial waves are smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies, with as much as half the total variance in periods shorter than 20 days, reflecting the frequency spectra of wind stress curl. Implications of the findings with regard to analyzing observations and data assimilation are discussed.

  5. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains in single- and multiple-table search over state-of-the-art methods.
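    The bitwise weighting idea can be sketched as follows: replace the uniform Hamming distance with per-bit weights so that more reliable hash bits dominate the ranking. This is a minimal illustration of the general technique, not the authors' implementation; the codes and weights are made up.

```python
def weighted_hamming(query, code, weights):
    # distance = sum of weights over disagreeing bits; with all
    # weights equal to 1 this reduces to the ordinary Hamming distance
    return sum(w for q, c, w in zip(query, code, weights) if q != c)

def rank(query, database, weights):
    # rank database hash codes by ascending weighted distance to the query
    return sorted(range(len(database)),
                  key=lambda i: weighted_hamming(query, database[i], weights))

query = [1, 0, 1, 1]
db = [[1, 0, 1, 0], [0, 0, 1, 1], [1, 1, 0, 1]]
weights = [0.9, 0.1, 0.5, 0.5]   # query-adaptive: bit 0 is trusted most
order = rank(query, db, weights)  # [0, 2, 1]
```

    With plain Hamming distance, codes 0 and 1 would tie at distance 1; the weights break the tie in favour of the code that disagrees only on a low-confidence bit.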

  6. FASMA: a service to format and analyze sequences in multiple alignments.

    PubMed

    Costantini, Susan; Colonna, Giovanni; Facchiano, Angelo M

    2007-12-01

    Multiple sequence alignments are successfully applied in many studies for understanding the structural and functional relations among single nucleic acid and protein sequences as well as whole families. Because of the rapid growth of sequence databases, multiple sequence alignments can often be very large and difficult to visualize and analyze. We offer a new service aimed at visualizing and analyzing the multiple alignments obtained with different external algorithms, with new features useful for the comparison of the aligned sequences as well as for the creation of a final image of the alignment. The service is named FASMA and is available at http://bioinformatica.isa.cnr.it/FASMA/.

  7. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ∼ 10⁴ and Ro ∼ 10⁻⁴ for Prandtl numbers relevant for liquid metals (Pr ∼ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  8. Chronic, Wireless Recordings of Large Scale Brain Activity in Freely Moving Rhesus Monkeys

    PubMed Central

    Schwarz, David A.; Lebedev, Mikhail A.; Hanson, Timothy L.; Dimitrov, Dragan F.; Lehew, Gary; Meloy, Jim; Rajangam, Sankaranarayani; Subramanian, Vivek; Ifft, Peter J.; Li, Zheng; Ramakrishnan, Arjun; Tate, Andrew; Zhuang, Katie; Nicolelis, Miguel A.L.

    2014-01-01

    Advances in techniques for recording large-scale brain activity contribute to both the elucidation of neurophysiological principles and the development of brain-machine interfaces (BMIs). Here we describe a neurophysiological paradigm for performing tethered and wireless large-scale recordings based on movable volumetric three-dimensional (3D) multielectrode implants. This approach allowed us to isolate up to 1,800 units per animal and simultaneously record the extracellular activity of close to 500 cortical neurons, distributed across multiple cortical areas, in freely behaving rhesus monkeys. The method is expandable, in principle, to thousands of simultaneously recorded channels. It also allows increased recording longevity (5 consecutive years), and recording of a broad range of behaviors, e.g. social interactions, and BMI paradigms in freely moving primates. We propose that wireless large-scale recordings could have a profound impact on basic primate neurophysiology research, while providing a framework for the development and testing of clinically relevant neuroprostheses. PMID:24776634

  9. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  10. Efficient RNA structure comparison algorithms.

    PubMed

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
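    The suffix-array substructure search the record builds on can be sketched in a few lines: a naive construction plus binary search for all occurrences of a pattern. This is a generic illustration of the data structure, with plain dot-bracket strings standing in for the paper's relative-addressing format:

```python
def suffix_array(s):
    # naive O(n^2 log n) construction; fine for a sketch
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_substructure(s, sa, pat):
    # binary search on the suffix array for every occurrence of pat
    lo, hi = 0, len(sa)
    while lo < hi:                      # lower bound: first suffix >= pat
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pat)] < pat:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                      # upper bound: first suffix not starting with pat
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pat)] == pat:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])         # positions where pat occurs

structure = "((..))((.))"               # dot-bracket stand-in for the RA format
sa = suffix_array(structure)
hits = find_substructure(structure, sa, "((")   # [0, 6]
```

    Each query costs O(|pat| · log n) string comparisons, which is what makes the database-wide substructure search fast.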

  11. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy.

    PubMed

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Boccara, A Claude; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  12. Fracture analysis of stiffened panels under biaxial loading with widespread cracking

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Dawicke, D. S.

    1995-01-01

    An elastic-plastic finite-element analysis with a critical crack-tip-opening angle (CTOA) fracture criterion was used to model stable crack growth and fracture of 2024-T3 aluminum alloy (bare and clad) panels for several thicknesses. The panels had either single or multiple-site damage (MSD) cracks subjected to uniaxial or biaxial loading. Analyses were also conducted on cracked stiffened panels with single or MSD cracks. The critical CTOA value for each thickness was determined by matching the failure load on a middle-crack tension specimen. Comparisons were made between the critical angles determined from the finite-element analyses and those measured with photographic methods. Predicted load-against-crack extension and failure loads for panels under biaxial loading, panels with MSD cracks, and panels with various numbers of stiffeners were compared with test data whenever possible. The predicted results agreed well with the test data even for large-scale plastic deformations. The analyses were also able to predict stable tearing behavior of a large lead crack in the presence of MSD cracks. The analyses were then used to study the influence of stiffeners on residual strength in the presence of widespread fatigue cracking. Small MSD cracks were found to greatly reduce the residual strength for large lead cracks even for stiffened panels.

  13. Fracture analysis of stiffened panels under biaxial loading with widespread cracking

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1995-01-01

    An elastic-plastic finite-element analysis with a critical crack-tip opening angle (CTOA) fracture criterion was used to model stable crack growth and fracture of 2024-T3 aluminum alloy (bare and clad) panels for several thicknesses. The panels had either single or multiple-site damage (MSD) cracks subjected to uniaxial or biaxial loading. Analyses were also conducted on cracked stiffened panels with single or MSD cracks. The critical CTOA value for each thickness was determined by matching the failure load on a middle-crack tension specimen. Comparisons were made between the critical angles determined from the finite-element analyses and those measured with photographic methods. Predicted load-against-crack extension and failure loads for panels under biaxial loading, panels with MSD cracks, and panels with various numbers of stiffeners were compared with test data whenever possible. The predicted results agreed well with the test data even for large-scale plastic deformations. The analyses were also able to predict stable tearing behavior of a large lead crack in the presence of MSD cracks. The analyses were then used to study the influence of stiffeners on residual strength in the presence of widespread fatigue cracking. Small MSD cracks were found to greatly reduce the residual strength for large lead cracks even for stiffened panels.

  14. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Claude Boccara, A.; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  15. High-Speed Interrogation for Large-Scale Fiber Bragg Grating Sensing

    PubMed Central

    Hu, Chenyuan; Bai, Wei

    2018-01-01

    A high-speed interrogation scheme for large-scale fiber Bragg grating (FBG) sensing arrays is presented. This technique employs parallel computing and pipeline control to modulate the incident light and demodulate the reflected sensing signal. One electro-optic modulator (EOM) and one semiconductor optical amplifier (SOA) were used to generate a phase delay to filter the reflected spectrum from multiple candidate FBGs with the same optical path difference (OPD). Experimental results showed that the fastest interrogation delay time for the proposed method was only about 27.2 μs for a single FBG interrogation, and the system scanning period was limited only by the optical transmission delay in the sensing fiber, owing to the multiple simultaneous central wavelength calculations. Furthermore, the proposed FPGA-based technique had a verified FBG wavelength demodulation stability of ±1 pm without averaging. PMID:29495263

  16. High-Speed Interrogation for Large-Scale Fiber Bragg Grating Sensing.

    PubMed

    Hu, Chenyuan; Bai, Wei

    2018-02-24

    A high-speed interrogation scheme for large-scale fiber Bragg grating (FBG) sensing arrays is presented. This technique employs parallel computing and pipeline control to modulate the incident light and demodulate the reflected sensing signal. One electro-optic modulator (EOM) and one semiconductor optical amplifier (SOA) were used to generate a phase delay to filter the reflected spectrum from multiple candidate FBGs with the same optical path difference (OPD). Experimental results showed that the fastest interrogation delay time for the proposed method was only about 27.2 μs for a single FBG interrogation, and the system scanning period was limited only by the optical transmission delay in the sensing fiber, owing to the multiple simultaneous central wavelength calculations. Furthermore, the proposed FPGA-based technique had a verified FBG wavelength demodulation stability of ±1 pm without averaging.
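    For context, the quantity such interrogators track is the Bragg wavelength, set by the grating's effective index and period through the standard first-order relation λ_B = 2·n_eff·Λ. A minimal sketch (the numeric values are typical textbook figures, not from this record):

```python
def bragg_wavelength_nm(n_eff, period_nm):
    # first-order Bragg condition: lambda_B = 2 * n_eff * Lambda
    return 2.0 * n_eff * period_nm

# a grating with effective index 1.45 and a 535 nm period
# reflects near 1551.5 nm, in the telecom C-band
lam = bragg_wavelength_nm(1.45, 535.0)  # 1551.5
```

    Strain or temperature shifts n_eff and Λ, so the ±1 pm demodulation stability quoted above translates directly into the resolvable strain/temperature change.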

  17. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.

  18. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
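    The key property the abstract relies on, namely that the many flux balance problems are independent and so parallelise trivially across workers, can be sketched with a worker pool. DistributedFBA.jl itself is written in Julia; here `solve_one_fba` is a hypothetical stand-in that a real implementation would replace with an LP solve (maximise flux through one reaction subject to S·v = 0 and flux bounds):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_one_fba(reaction_index):
    # hypothetical placeholder for one linear-programming solve;
    # the dummy return value just makes the sketch runnable
    return reaction_index, float(reaction_index) * 0.5

reaction_indices = range(100)
with ThreadPoolExecutor(max_workers=8) as pool:
    # each flux balance problem is independent, so the set maps
    # cleanly over any number of threads or nodes
    optima = dict(pool.map(solve_one_fba, reaction_indices))
```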

  19. A novel tool for naturalistic assessment of behavioural dysregulation after traumatic brain injury: A pilot study.

    PubMed

    McKeon, Ashlee; Terhorst, Lauren; Skidmore, Elizabeth; Ding, Dan; Cooper, Rory; McCue, Michael

    2017-01-01

    This study aimed to develop a novel tool for measuring behavioural dysregulation in adults with traumatic brain injury (TBI) using objective data sources and real-world application, and to provide preliminary evidence for its psychometric properties. Fourteen adults with TBI receiving services at a local brain injury rehabilitation programme completed multiple assessments of behaviour, followed by a series of challenging problem-solving tasks while being video recorded. Trained clinicians completed post-hoc behavioural assessments using the behavioural dysregulation rating scale, and behavioural event data were then extracted for comparison with self-report measures. Subject matter experts in neurorehabilitation were in 100% agreement that, preliminarily, the new tool measured the construct of behavioural dysregulation. Construct validity was established through strong convergence with 'like' measures and weak correlation with 'unlike' measures. Substantial inter-rater reliability was established between two trained clinician raters. This study provides preliminary evidence supporting the use of a new precision measurement tool of behaviour in post-acute TBI that can be deployed naturalistically, where deficits truly manifest. Future large-scale confirmatory psychometric trials are warranted to further establish the utility of this new tool in rehabilitation research.

  20. Microbial denitrification dominates nitrate losses from forest ecosystems

    PubMed Central

    Fang, Yunting; Koba, Keisuke; Makabe, Akiko; Takahashi, Chieko; Zhu, Weixing; Hayashi, Takahiro; Hokari, Azusa A.; Urakawa, Rieko; Bai, Edith; Houlton, Benjamin Z.; Xi, Dan; Zhang, Shasha; Matsushita, Kayo; Tu, Ying; Liu, Dongwei; Zhu, Feifei; Wang, Zhenyu; Zhou, Guoyi; Chen, Dexiang; Makita, Tomoko; Toda, Hiroto; Liu, Xueyan; Chen, Quansheng; Zhang, Deqiang; Li, Yide; Yoh, Muneoki

    2015-01-01

    Denitrification removes fixed nitrogen (N) from the biosphere, thereby restricting the availability of this key limiting nutrient for terrestrial plant productivity. This microbially driven process has been exceedingly difficult to measure, however, given the large background of nitrogen gas (N2) in the atmosphere and vexing scaling issues associated with heterogeneous soil systems. Here, we use the natural abundance of N and oxygen isotopes in nitrate (NO3−) to examine denitrification rates across six forest sites in southern China and central Japan, which span temperate to tropical climates, as well as various stand ages and N deposition regimes. Our multiple stable isotope approach across soil to watershed scales shows that traditional techniques underestimate terrestrial denitrification fluxes by up to 98%, with annual losses of 5.6–30.1 kg of N per hectare via this gaseous pathway. These N export fluxes are up to sixfold higher than NO3− leaching, pointing to widespread dominance of denitrification in removing NO3− from forest ecosystems across a range of conditions. Further, we report that the loss of NO3− to denitrification decreased relative to leaching pathways in sites with the highest rates of anthropogenic N deposition. PMID:25605898

  1. Carbon pools and fluxes in small temperate forest landscapes: Variability and implications for sampling design

    Treesearch

    John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan

    2010-01-01

    Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...

  2. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams over short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops are demonstrated, opening the way for large-scale modelling of LWFA scenarios.

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
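    The Boolean viewshed logic referred to above reduces, along a single sight line, to tracking the maximum elevation angle seen so far: a cell is visible only if its angle exceeds every angle at a nearer cell. A minimal 1-D sketch (cell size, observer height, and the profile are illustrative values, not from the case study):

```python
def line_of_sight(elev, dx=1.0, observer_height=1.7):
    # Boolean visibility along a 1-D terrain profile
    eye = elev[0] + observer_height
    visible, max_angle = [True], float("-inf")
    for i in range(1, len(elev)):
        angle = (elev[i] - eye) / (i * dx)   # tangent of the elevation angle
        visible.append(angle > max_angle)    # visible iff it clears all nearer cells
        max_angle = max(max_angle, angle)
    return visible

profile = [0.0, 10.0, 5.0, 20.0, 15.0]       # elevations (m) at 1 m spacing
vis = line_of_sight(profile)                 # [True, True, False, False, False]
```

    A 2-D viewshed applies the same test along every ray from the observer; extended viewsheds additionally record quantities such as the angle margin above the local horizon rather than just the Boolean result.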

  4. Weak gravitational lensing due to large-scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III

    1990-01-01

    The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10 to the 6th random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of a strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.

  5. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. It combines motor current signature analysis (MCSA) with principal component analysis (PCA) and the comparison of observed values with those predicted from a model built using nominally healthy data. The simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
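    The two steps the abstract names, coarse-graining a signal over increasing time scales and computing sample entropy (SampEn) on each coarse-grained series, can be sketched directly. This is a simplified brute-force version for illustration; production MSE implementations normalise the tolerance by the signal's standard deviation and refine the template counting:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn: negative log of the conditional probability that template
    # pairs matching for m points (within tolerance r) also match for m+1
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(x, scale):
    # non-overlapping window averages: the coarse-graining step of MSE
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

    A perfectly regular signal yields near-zero SampEn at every scale, while a degrading gear adds irregularity whose entropy signature varies with scale, which is what the PCA stage then compares against the healthy baseline.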

  6. The Kinematics of Multiple-peaked Lyα Emission in Star-forming Galaxies at z ~ 2-3

    NASA Astrophysics Data System (ADS)

    Kulas, Kristin R.; Shapley, Alice E.; Kollmeier, Juna A.; Zheng, Zheng; Steidel, Charles C.; Hainline, Kevin N.

    2012-01-01

    We present new results on the Lyα emission-line kinematics of 18 z ~ 2-3 star-forming galaxies with multiple-peaked Lyα profiles. With our large spectroscopic database of UV-selected star-forming galaxies at these redshifts, we have determined that ~30% of such objects with detectable Lyα emission display multiple-peaked emission profiles. These profiles provide additional constraints on the escape of Lyα photons due to the rich velocity structure in the emergent line. Despite recent advances in modeling the escape of Lyα from star-forming galaxies at high redshifts, comparisons between models and data are often missing crucial observational information. Using Keck II NIRSPEC spectra of Hα (z ~ 2) and [O III]λ5007 (z ~ 3), we have measured accurate systemic redshifts, rest-frame optical nebular velocity dispersions, and emission-line fluxes for the objects in the sample. In addition, rest-frame UV luminosities and colors provide estimates of star formation rates and the degree of dust extinction. In concert with the profile sub-structure, these measurements provide critical constraints on the geometry and kinematics of interstellar gas in high-redshift galaxies. Accurate systemic redshifts allow us to translate the multiple-peaked Lyα profiles into velocity space, revealing that the majority (11/18) display double-peaked emission straddling the velocity-field zero point with stronger red-side emission. Interstellar absorption-line kinematics suggest the presence of large-scale outflows for the majority of objects in our sample, with an average measured interstellar absorption velocity offset of ⟨Δv_abs⟩ = −230 km s⁻¹. A comparison of the interstellar absorption kinematics for objects with multiple- and single-peaked Lyα profiles indicates that the multiple-peaked objects are characterized by significantly narrower absorption line widths.
We compare our data with the predictions of simple models for outflowing and infalling gas distributions around high-redshift galaxies. While popular "shell" models provide a qualitative match with many of the observations of Lyα emission, we find that in detail there are important discrepancies between the models and data, as well as problems with applying the framework of an expanding thin shell of gas to explain high-redshift galaxy spectra. Our data highlight these inconsistencies, as well as illuminating critical elements for success in future models of outflow and infall in high-redshift galaxies. Based, in part, on data obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA, and was made possible by the generous financial support of the W. M. Keck Foundation.

  7. Using occupancy estimation to assess the effectiveness of a regional multiple-species conservation plan: bats in the Pacific Northwest

    Treesearch

    Theodore Weller

    2008-01-01

Regional conservation plans are increasingly used to plan for and protect biodiversity at large spatial scales; however, the means of quantitatively evaluating their effectiveness are rarely specified. Multiple-species approaches, particularly those that employ site-occupancy estimation, have been proposed as robust and efficient alternatives for assessing the status of...

  8. Item Response Theory with Covariates (IRT-C): Assessing Item Recovery and Differential Item Functioning for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.

    2016-01-01

In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables, as DIF is examined for each variable separately. In contrast, the item response theory with covariates (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…

  9. Organizational Communication in Emergencies: Using Multiple Channels and Sources to Combat Noise and Capture Attention

    ERIC Educational Resources Information Center

    Stephens, Keri K.; Barrett, Ashley K.; Mahometa, Michael J.

    2013-01-01

    This study relies on information theory, social presence, and source credibility to uncover what best helps people grasp the urgency of an emergency. We surveyed a random sample of 1,318 organizational members who received multiple notifications about a large-scale emergency. We found that people who received 3 redundant messages coming through at…

  10. Designing a Stage-Sensitive Written Assessment of Elementary Students' Scheme for Multiplicative Reasoning

    ERIC Educational Resources Information Center

    Hodkowski, Nicola M.; Gardner, Amber; Jorgensen, Cody; Hornbein, Peter; Johnson, Heather L.; Tzur, Ron

    2016-01-01

    In this paper we examine the application of Tzur's (2007) fine-grained assessment to the design of an assessment measure of a particular multiplicative scheme so that non-interview, good enough data can be obtained (on a large scale) to infer into elementary students' reasoning. We outline three design principles that surfaced through our recent…

  11. The Development of Multiple-Choice Items Consistent with the AP Chemistry Curriculum Framework to More Accurately Assess Deeper Understanding

    ERIC Educational Resources Information Center

    Domyancich, John M.

    2014-01-01

    Multiple-choice questions are an important part of large-scale summative assessments, such as the advanced placement (AP) chemistry exam. However, past AP chemistry exam items often lacked the ability to test conceptual understanding and higher-order cognitive skills. The redesigned AP chemistry exam shows a distinctive shift in item types toward…

  12. The maximum rate of mammal evolution

    NASA Astrophysics Data System (ADS)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-03-01

How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half that length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
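The clade-maximum-rate idea lends itself to a small illustration. The sketch below is a hypothetical reading of the metric, assuming it amounts to the largest per-generation change in log body mass over a set of ancestor-descendant comparisons; the function name and sample numbers are invented for illustration, not taken from the study's data.

```python
# Hypothetical sketch of a "clade maximum rate"-style calculation:
# the maximum per-generation rate of change in log10 body mass across
# a set of within-clade comparisons. Illustrative only; the study's
# actual definition and data may differ.
import math

def max_clade_rate(comparisons):
    """comparisons: list of (mass_ratio, generations) tuples.

    Returns the largest rate, in log10-mass units per generation.
    """
    return max(math.log10(ratio) / gens for ratio, gens in comparisons)

# Check against a fold-change quoted in the abstract:
# a 100-fold mass increase over a minimum of 1.6 million generations.
rate = math.log10(100) / 1.6e6  # = 1.25e-6 log10 units per generation
```

A 1,000-fold increase over 5.1 million generations gives a slightly lower rate (about 5.9e-7), consistent with the abstract's pattern that larger transformations accrue at similar or lower per-generation rates.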

  13. Comparison of WAIS-III Short Forms for Measuring Index and Full-Scale Scores

    ERIC Educational Resources Information Center

    Girard, Todd A.; Axelrod, Bradley N.; Wilkins, Leanne K.

    2010-01-01

    This investigation assessed the ability of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) short forms to estimate both index and IQ scores in a large, mixed clinical sample (N = 809). More specifically, a commonly used modification of Ward's seven-subtest short form (SF7-A), a recently proposed index-based SF7-C and eight-subtest…

  14. The earth's foreshock, bow shock, and magnetosheath

    NASA Technical Reports Server (NTRS)

    Onsager, T. G.; Thomsen, M. F.

    1991-01-01

    Studies directly pertaining to the earth's foreshock, bow shock, and magnetosheath are reviewed, and some comparisons are made with data on other planets. Topics considered in detail include the electron foreshock, the ion foreshock, the quasi-parallel shock, the quasi-perpendicular shock, and the magnetosheath. Information discussed spans a broad range of disciplines, from large-scale macroscopic plasma phenomena to small-scale microphysical interactions.

  15. Polychaete richness and abundance enhanced in anthropogenically modified estuaries despite high concentrations of toxic contaminants.

    PubMed

    Dafforn, Katherine A; Kelaher, Brendan P; Simpson, Stuart L; Coleman, Melinda A; Hutchings, Pat A; Clark, Graeme F; Knott, Nathan A; Doblin, Martina A; Johnston, Emma L

    2013-01-01

Ecological communities are increasingly exposed to multiple chemical and physical stressors, but distinguishing anthropogenic impacts from other environmental drivers remains challenging. Rarely are multiple stressors investigated in replicated studies over large spatial scales (>1000 km) or supported with manipulations that are necessary to interpret ecological patterns. We measured the composition of sediment infaunal communities in relation to anthropogenic and natural stressors at multiple sites within seven estuaries. We observed increases in the richness and abundance of polychaete worms in heavily modified estuaries with severe metal contamination, but no changes in the diversity or abundance of other taxa. Estuaries in which toxic contaminants were elevated also showed evidence of organic enrichment. We hypothesised that the observed response of polychaetes was not a 'positive' response to toxic contamination or a reduction in biotic competition, but due to high levels of nutrients in heavily modified estuaries driving productivity in the water column and enriching the sediment over large spatial scales. We deployed defaunated field-collected sediments from the surveyed estuaries in a small-scale experiment, but observed no effects of sediment characteristics (toxic or enriching). Furthermore, invertebrate recruitment instead reflected the low diversity and abundance observed during field surveys of this relatively 'pristine' estuary. This suggests that differences observed in the survey are not a direct consequence of sediment characteristics (even severe metal contamination) but are related to parameters that covary with estuary modification such as enhanced productivity from nutrient inputs and the diversity of the local species pool. This has implications for the interpretation of diversity measures in large-scale monitoring studies in which the observed patterns may be strongly influenced by many factors that covary with anthropogenic modification.

  16. Polychaete Richness and Abundance Enhanced in Anthropogenically Modified Estuaries Despite High Concentrations of Toxic Contaminants

    PubMed Central

    Dafforn, Katherine A.; Kelaher, Brendan P.; Simpson, Stuart L.; Coleman, Melinda A.; Hutchings, Pat A.; Clark, Graeme F.; Knott, Nathan A.; Doblin, Martina A.; Johnston, Emma L.

    2013-01-01

Ecological communities are increasingly exposed to multiple chemical and physical stressors, but distinguishing anthropogenic impacts from other environmental drivers remains challenging. Rarely are multiple stressors investigated in replicated studies over large spatial scales (>1000 km) or supported with manipulations that are necessary to interpret ecological patterns. We measured the composition of sediment infaunal communities in relation to anthropogenic and natural stressors at multiple sites within seven estuaries. We observed increases in the richness and abundance of polychaete worms in heavily modified estuaries with severe metal contamination, but no changes in the diversity or abundance of other taxa. Estuaries in which toxic contaminants were elevated also showed evidence of organic enrichment. We hypothesised that the observed response of polychaetes was not a ‘positive’ response to toxic contamination or a reduction in biotic competition, but due to high levels of nutrients in heavily modified estuaries driving productivity in the water column and enriching the sediment over large spatial scales. We deployed defaunated field-collected sediments from the surveyed estuaries in a small-scale experiment, but observed no effects of sediment characteristics (toxic or enriching). Furthermore, invertebrate recruitment instead reflected the low diversity and abundance observed during field surveys of this relatively ‘pristine’ estuary. This suggests that differences observed in the survey are not a direct consequence of sediment characteristics (even severe metal contamination) but are related to parameters that covary with estuary modification such as enhanced productivity from nutrient inputs and the diversity of the local species pool.
This has implications for the interpretation of diversity measures in large-scale monitoring studies in which the observed patterns may be strongly influenced by many factors that covary with anthropogenic modification. PMID:24098816

  17. Cooperation without Culture? The Null Effect of Generalized Trust on Intentional Homicide: A Cross-National Panel Analysis, 1995–2009

    PubMed Central

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211

  18. Lagrangian space consistency relation for large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, Bart; Hui, Lam; Xiao, Xiao

Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  19. Lagrangian space consistency relation for large scale structure

    DOE PAGES

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-29

Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  20. Comparison of Observations of Sporadic-E Layers in the Nighttime and Daytime Mid-Latitude Ionosphere

    NASA Technical Reports Server (NTRS)

    Pfaff, R.; Freudenreich, H.; Rowland, D.; Klenzing, J.; Clemmons, J.; Larsen, M.; Kudeki, E.; Franke, S.; Urbina, J.; Bullett, T.

    2012-01-01

A comparison of numerous rocket experiments to investigate mid-latitude sporadic-E layers is presented. Electric field and plasma density data gathered on sounding rockets launched in the presence of sporadic-E layers and quasi-periodic (QP) radar echoes reveal complex electrodynamics, including both DC parameters and plasma waves detected over a large range of scales. We show both DC and wave electric fields and discuss their relationship to intense sporadic-E layers in both nighttime and daytime conditions. Where available, neutral wind observations provide the complete electrodynamic picture, revealing an essential source of free energy that both sets up the layers and drives them unstable. Electric field data from the nighttime experiments reveal the presence of km-scale waves as well as well-defined packets of broadband (tens of meters to meters) irregularities. Surprisingly, in both the nighttime and daytime experiments, neither the large-scale nor the short-scale waves appear to be distinctly organized by the sporadic-E density layer itself. The observations are discussed in the context of current theories regarding sporadic-E layer generation and quasi-periodic echoes.

  1. Spatiotemporal analysis of land use and land cover change in the Brazilian Amazon

    PubMed Central

    Li, Guiying; Moran, Emilio; Hetrick, Scott

    2013-01-01

This paper provides a comparative analysis of land use and land cover (LULC) changes among three study areas with different biophysical environments in the Brazilian Amazon at multiple scales, from per-pixel and polygon to census sector and study area. Landsat images acquired in the years 1990/1991, 1999/2000, and 2008/2010 were used to examine LULC change trajectories with the post-classification comparison approach. A classification system composed of six classes - forest, savanna, other-vegetation (secondary succession and plantations), agro-pasture, impervious surface, and water - was designed for this study. A hierarchical classification method was used to classify Landsat images into thematic maps. This research shows different spatiotemporal change patterns, composition, and rates among the three study areas and indicates the importance of analyzing LULC change at multiple scales. The LULC change analysis over time for entire study areas provides an overall picture of change trends, but detailed change trajectories and their spatial distributions can be better examined at a per-pixel scale. The LULC change at the polygon scale provides information on changes in patch sizes over time, while the LULC change at the census sector scale gives new insights on how human-induced activities (e.g., urban expansion, roads, and land use history) affect LULC change patterns and rates. This research indicates the necessity of implementing change detection at multiple scales for better understanding the mechanisms of LULC change patterns and rates. PMID:24127130
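The post-classification comparison approach reduces to a pixel-wise cross-tabulation of two independently classified maps. A minimal stdlib sketch, with invented toy class maps standing in for the real Landsat classifications:

```python
# Post-classification comparison, minimal sketch: each date's image is
# classified independently, then pixel pairs are cross-tabulated to give
# change trajectories (e.g., forest -> agro-pasture). Toy data only.
from collections import Counter

# Class scheme taken from the abstract; indices are illustrative.
CLASSES = ["forest", "savanna", "other-vegetation",
           "agro-pasture", "impervious", "water"]

def change_trajectories(map_t1, map_t2):
    """Count (class_t1, class_t2) pixel pairs from two flattened class maps."""
    return Counter(zip(map_t1, map_t2))

# Hypothetical 6-pixel maps (indices into CLASSES):
t1 = [0, 0, 3, 0, 1, 3]
t2 = [0, 3, 3, 3, 1, 3]
traj = change_trajectories(t1, t2)
# traj[(0, 3)] == 2: two pixels converted from forest to agro-pasture
```

Summing the counter's off-diagonal entries gives the total changed area; normalizing each row yields per-class transition probabilities, which is essentially how change rates at the study-area scale are reported.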

  2. Detection of right-to-left shunts: comparison between the International Consensus and Spencer Logarithmic Scale criteria.

    PubMed

    Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V

    2008-10-01

International Consensus Criteria (ICC) consider right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). The Spencer Logarithmic Scale (SLS) offers more grades of RLS, with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic strokes or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for detection of large RLS. ICC and SLS criteria yielded false positive rates of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using Spencer Scale grade III or higher can decrease by one-half the number of false positive TCD diagnoses to predict large RLS on TEE.
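The diagnostic accuracy figures quoted in this abstract follow directly from the reported 2x2 table (24 true positives, 48 true negatives, 4 false positives, 2 false negatives). A short sketch reproducing them:

```python
# Standard diagnostic-accuracy formulas applied to the 2x2 table
# reported in the abstract; this reproduces the published percentages.
def diagnostics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "sensitivity": tp / (tp + fn),   # 24/26
        "specificity": tn / (tn + fp),   # 48/52
        "ppv": tp / (tp + fp),           # 24/28
        "npv": tn / (tn + fn),           # 48/50
        "accuracy": (tp + tn) / total,   # 72/78
    }

d = diagnostics(tp=24, tn=48, fp=4, fn=2)
# d["sensitivity"] and d["specificity"] both round to 92.3%,
# d["ppv"] to 85.7%, d["npv"] is exactly 96%, d["accuracy"] 92.3%
```

Sensitivity and specificity coincide here only because TP/(TP+FN) and TN/(TN+FP) happen to equal the same ratio (24/26 = 48/52); they are independent quantities in general.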

  3. MANGO Imager Network Observations of Geomagnetic Storm Impact on Midlatitude 630 nm Airglow Emissions

    NASA Astrophysics Data System (ADS)

    Kendall, E. A.; Bhatt, A.

    2017-12-01

    The Midlatitude Allsky-imaging Network for GeoSpace Observations (MANGO) is a network of imagers filtered at 630 nm spread across the continental United States. MANGO is used to image large-scale airglow and aurora features and observes the generation, propagation, and dissipation of medium and large-scale wave activity in the subauroral, mid and low-latitude thermosphere. This network consists of seven all-sky imagers providing continuous coverage over the United States and extending south into Mexico. This network sees high levels of medium and large scale wave activity due to both neutral and geomagnetic storm forcing. The geomagnetic storm observations largely fall into two categories: Stable Auroral Red (SAR) arcs and Large-scale traveling ionospheric disturbances (LSTIDs). In addition, less-often observed effects include anomalous airglow brightening, bright swirls, and frozen-in traveling structures. We will present an analysis of multiple events observed over four years of MANGO network operation. We will provide both statistics on the cumulative observations and a case study of the "Memorial Day Storm" on May 27, 2017.

  4. SLIDE - a web-based tool for interactive visualization of large-scale -omics data.

    PubMed

    Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon

    2018-06-28

Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow real-time customization of graphics based on user interaction. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions in a single screen. SLIDE is publicly available under the BSD license, both as an online version and as a stand-alone version, at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.

  5. Tropical warming and the dynamics of endangered primates.

    PubMed

    Wiederholt, Ruscena; Post, Eric

    2010-04-23

    Many primate species are severely threatened, but little is known about the effects of global warming and the associated intensification of El Niño events on primate populations. Here, we document the influences of the El Niño southern oscillation (ENSO) and hemispheric climatic variability on the population dynamics of four genera of ateline (neotropical, large-bodied) primates. All ateline genera experienced either an immediate or a lagged negative effect of El Niño events. ENSO events were also found to influence primate resource levels through neotropical arboreal phenology. Furthermore, frugivorous primates showed a high degree of interspecific population synchrony over large scales across Central and South America attributable to the recent trends in large-scale climate. These results highlight the role of large-scale climatic variation and trends in ateline primate population dynamics, and emphasize that global warming could pose additional threats to the persistence of multiple species of endangered primates.

  6. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    NASA Technical Reports Server (NTRS)

    Freeman, Michael S.

    1987-01-01

The primary research objective of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) is to develop a methodology for constructing and maintaining large-scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real-world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  7. Inventory and analysis of natural vegetation and related resources from space and high altitude photography

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1972-01-01

    A multiple sampling technique was developed whereby spacecraft photographs supported by aircraft photographs could be used to quantify plant communities. Large scale (1:600 to 1:2,400) color infrared aerial photographs were required to identify shrub and herbaceous species. These photos were used to successfully estimate a herbaceous standing crop biomass. Microdensitometry was used to discriminate among specific plant communities and individual plant species. Large scale infrared photography was also used to estimate mule deer deaths and population density of northern pocket gophers.

  8. A time-accurate finite volume method valid at all flow velocities

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1993-01-01

A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic, and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. Comparisons of three generally accepted time-advancing schemes, i.e., Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA), are made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method, incorporating the ITA, is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates.
It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil. The calculated streaklines are in very good agreement with the experimentally obtained smoke picture. The calculated turbulent viscosity contours show that the transition from laminar to turbulent state and the relaminarization occur widely in space as well as in time. The ensemble-averaged velocity profiles are also in good agreement with the measured data, and this agreement indicates that the numerical method, as well as the multiple-time-scale turbulence equations, successfully predicts the unsteady transitional turbulence field. The chemical reactions for the hydrogen in the vitiated supersonic airstream are described using 9 chemical species and 48 reaction steps. Note that fast chemistry cannot be used to describe the fine details (such as the instability) of chemically reacting flows, while reduced chemical kinetics cannot be used confidently due to the uncertainty contained in the reaction mechanisms. However, the use of a detailed finite-rate chemistry may make it difficult to obtain a fully converged solution due to the coupling between the large number of flow, turbulence, and chemical equations. The numerical results obtained in the present study are in good agreement with the measured data. This agreement is attributed to the numerical method, which can yield strongly converged results for the reacting flow, and to the use of the multiple-time-scale turbulence equations, which can accurately describe the mixing of the fuel and the oxidant.

  9. What to do When Scalar Invariance Fails: The Extended Alignment Method for Multi-Group Factor Analysis Comparison of Latent Means Across Many Groups.

    PubMed

    Marsh, Herbert W; Guo, Jiesi; Parker, Philip D; Nagengast, Benjamin; Asparouhov, Tihomir; Muthén, Bengt; Dicke, Theresa

    2017-01-12

Scalar invariance is an unachievable ideal that in practice can only be approximated, often using potentially questionable approaches such as partial invariance based on a stepwise selection of parameter estimates with large modification indices. Study 1 demonstrates an extension of the power and flexibility of the alignment approach for comparing latent factor means in large-scale studies (30 OECD countries, 8 factors, 44 items, N = 249,840), for which scalar invariance is typically not supported in the traditional confirmatory factor analysis approach to measurement invariance (CFA-MI). Importantly, we introduce an alignment-within-CFA (AwC) approach, transforming alignment from a largely exploratory tool into a confirmatory tool, and enabling analyses that previously have not been possible with alignment (testing the invariance of uniquenesses and factor variances/covariances; multiple-group MIMIC models; contrasts on latent means) and structural equation models more generally. Specifically, it also allowed a comparison of gender differences in a 30-country MIMIC AwC (i.e., a SEM with gender as a covariate) and a 60-group AwC CFA (i.e., 30 countries × 2 genders) analysis. Study 2, a simulation study following up issues raised in Study 1, showed that latent means were more accurately estimated with alignment than with the scalar CFA-MI, and particularly with partial invariance scalar models based on the heavily criticized stepwise selection strategy. In summary, alignment augmented by AwC provides applied researchers from diverse disciplines considerable flexibility to address substantively important issues when the traditional CFA-MI scalar model does not fit the data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Visualization of the Eastern Renewable Generation Integration Study: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron

The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.

  11. Comparison of Species Sensitivity Distributions Derived from Interspecies Correlation Models to Distributions used to Derive Water Quality Criteria

    EPA Science Inventory

    Species sensitivity distributions (SSD) require a large number of measured toxicity values to define a chemical’s toxicity to multiple species. This investigation comprehensively evaluated the accuracy of SSDs generated from toxicity values predicted from interspecies correlation...
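Although the record is truncated, the core SSD construction it evaluates, fitting a distribution to per-species toxicity values and reading off a hazardous concentration, can be sketched. A minimal illustration assuming a log-normal SSD and hypothetical LC50 values (none of the numbers below come from the study):

```python
import numpy as np

def fit_ssd_hc5(toxicity_values):
    """Fit a log-normal species sensitivity distribution (SSD) to
    per-species toxicity values and return the HC5: the concentration
    expected to affect the most sensitive 5% of species.
    Uses z(0.05) = -1.645, the 5th percentile of the standard normal."""
    log_tox = np.log10(np.asarray(toxicity_values, dtype=float))
    mu, sigma = log_tox.mean(), log_tox.std(ddof=1)
    return 10 ** (mu - 1.645 * sigma)

# Hypothetical LC50 values (mg/L) for several species exposed to one chemical.
lc50 = [0.8, 1.5, 2.3, 4.1, 7.9, 12.0, 20.5, 33.0]
hc5 = fit_ssd_hc5(lc50)
```

The HC5 falls below the lowest measured value here, which is the usual pattern: the fitted tail extrapolates beyond the tested species, which is exactly why the accuracy of predicted (rather than measured) toxicity values matters for SSDs.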

  12. Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane

    NASA Technical Reports Server (NTRS)

    Syberg, J.; Koncsek, J. L.

    1977-01-01

    Test data were obtained with a 1/4-scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was also tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation. The results show that a fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application.

  13. Mach Number effects on turbulent superstructures in wall bounded flows

    NASA Astrophysics Data System (ADS)

    Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven

    2017-11-01

    Planar and three-dimensional flow field measurements along a flat plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim of characterizing the scaling, spatial organization, and topology of large-scale turbulent superstructures in compressible flow. This facility is well suited to this investigation, as the ratio of boundary layer thickness to test section spanwise extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large-scale features is the sheer size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 to 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods, which show large-scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high- and low-momentum structures is on the order of a boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.
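The multi-point statistics mentioned above reduce, in the simplest case, to two-point autocorrelations of velocity fluctuations, with a correlation length read off where the correlation decays. A minimal sketch on a synthetic 1-D signal (not the TWM data; the threshold and signal are illustrative):

```python
import numpy as np

def correlation_length(u_fluct, dx, threshold=0.05):
    """Estimate a streamwise correlation length from the normalized
    two-point autocorrelation of a velocity fluctuation signal.
    The length is the lag where the correlation first drops below
    `threshold`, converted to physical units via the grid spacing dx."""
    u = u_fluct - u_fluct.mean()
    n = len(u)
    # Unbiased autocorrelation estimate for lags 0..n-1.
    r = np.correlate(u, u, mode="full")[n - 1:] / (np.arange(n, 0, -1) * u.var())
    below = np.where(r < threshold)[0]
    cutoff = below[0] if below.size else n - 1
    return cutoff * dx

# Synthetic signal: a large-scale sinusoid (period 10 "boundary layer
# thicknesses") plus small-scale noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 2000)   # streamwise coordinate in delta units
u = np.sin(2 * np.pi * x / 10) + 0.1 * rng.standard_normal(x.size)
L = correlation_length(u, dx=x[1] - x[0])
```

For a periodic structure of wavelength 10, the correlation crosses the threshold near a quarter wavelength, so L comes out around 2.5; in real PIV fields the same statistic is formed in two or three dimensions and averaged over many snapshots.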

  14. Comparison of Personal Resources in Patients Who Differently Estimate the Impact of Multiple Sclerosis.

    PubMed

    Wilski, Maciej; Tomczak, Maciej

    2017-04-01

    Discrepancies between physicians' assessments and patients' subjective representations of disease severity may influence physician-patient communication and the management of a chronic illness such as multiple sclerosis (MS). For these reasons, it is important to recognize factors that distinguish patients who differently estimate the impact of MS. The purpose of this study was to verify whether patients who overestimate or underestimate the impact of MS differ in their perception of personal resources from individuals presenting with a realistic appraisal of their physical condition. A total of 172 women and 92 men diagnosed with MS completed the Multiple Sclerosis Impact Scale, University of Washington Self-Efficacy Scale, Rosenberg Self-Esteem Scale, Body Esteem Scale, Brief Illness Perception Questionnaire, Treatment Beliefs Scale, Actually Received Support Scale, and Socioeconomic Resources Scale. Physicians' assessment of health status was determined with the Expanded Disability Status Scale (EDSS). Linear regression analysis was conducted to identify subsets of patients with various patterns of subjective health and EDSS scores. Patients overestimating the impact of their disease presented with significantly lower levels of self-esteem, self-efficacy in MS, and body esteem; furthermore, they perceived their condition as more threatening than did realists and underestimators. They also assessed anti-MS treatment less favorably, had fewer socioeconomic resources, and received less support than underestimators. Additionally, underestimators presented with significantly better perceptions of their disease, self, and body than did realists. Self-assessment of MS-related symptoms is associated with a specific perception of personal resources in coping with the disease. These findings may facilitate communication with patients and point to new directions for future research on adaptation to MS.

  15. Improvements in cognition, quality of life, and physical performance with clinical Pilates in multiple sclerosis: a randomized controlled trial

    PubMed Central

    Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen

    2016-01-01

    [Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled in this study and divided into two groups: a clinical Pilates group and a control group. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed Up and Go test), fatigue (Modified Fatigue Impact Scale), depression (Beck Depression Inventory), and quality of life (Multiple Sclerosis International Quality of Life Questionnaire) were measured before and after treatment in all participants. [Results] In the clinical Pilates group, there were statistically significant pre- to post-treatment differences in balance, timed performance, fatigue, and Multiple Sclerosis Functional Composite scores. In the control group, we also found significant pre- to post-treatment differences in the timed performance tests, the Timed Up and Go test, and the Multiple Sclerosis Functional Composite. According to the difference analyses, Multiple Sclerosis Functional Composite and Multiple Sclerosis International Quality of Life Questionnaire scores differed significantly between the two groups in favor of the clinical Pilates group, and between-group comparisons also showed clinically significant differences in its favor. Clinical Pilates improved cognitive functions and quality of life compared with traditional exercise. [Conclusion] In multiple sclerosis treatment, clinical Pilates should be used as a holistic approach by physical therapists. PMID:27134355

  17. Supersonic jet noise generated by large scale instabilities

    NASA Technical Reports Server (NTRS)

    Seiner, J. M.; Mclaughlin, D. K.; Liu, C. H.

    1982-01-01

    The role of large-scale wavelike structures as the major mechanism for supersonic jet noise emission is examined. Using aerodynamic and acoustic data for low Reynolds number supersonic jets (Reynolds numbers at and below 70,000), comparisons are made with flow fluctuation and acoustic measurements in high Reynolds number supersonic jets. These comparisons show that a similar physical mechanism governs the generation of sound emitted in the principal noise direction. The experimental data are further compared with a linear instability theory, whose prediction of the axial location of peak wave amplitude agrees satisfactorily with measured phase-averaged flow fluctuation data in the low Reynolds number jets. In the high Reynolds number flow, however, theory and experiment disagree on the axial location of peak flow fluctuations, and the theory predicts an apparent origin for sound emission far upstream of that indicated by the measured acoustic data.

  18. Beyond Linear Sequence Comparisons: The use of genome-level characters for phylogenetic reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boore, Jeffrey L.

    2004-11-27

    Although the phylogenetic relationships of many organisms have been convincingly resolved by comparisons of nucleotide or amino acid sequences, others have remained equivocal despite great effort. Now that large-scale genome sequencing projects are sampling many lineages, it is becoming feasible to compare large data sets of genome-level features and to develop this as a tool for phylogenetic reconstruction that has advantages over conventional sequence comparisons. Although such features are unlikely to address a large number of evolutionary branch points across the broad tree of life, owing to the infeasibility of such sampling, they have great potential for convincingly resolving many critical, contested relationships for which no other data seem promising. However, it is important that we recognize potential pitfalls, establish reasonable standards for acceptance, and employ rigorous methodology to guard against a return to earlier days of scenario-driven evolutionary reconstructions.

  19. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    PubMed Central

    Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André

    2015-01-01

    We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
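The pair-based STDP rule such an adaptor applies can be sketched in a few lines of software. This is an illustrative model of the generic technique, not the paper's hardware circuits; the time constants, learning rates, and class names are invented for the example:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Potentiation when the pre-synaptic spike precedes the post-synaptic
    spike (dt > 0), depression otherwise, both decaying exponentially
    with the spike-time difference. Constants are illustrative."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

class SynapseAdaptor:
    """A plasticity 'adaptor' kept separate from the neuron array,
    mirroring the abstract's idea: it receives pre/post spike times
    assigned to it and maintains a bounded weight."""
    def __init__(self, w=0.5, w_min=0.0, w_max=1.0):
        self.w, self.w_min, self.w_max = w, w_min, w_max

    def on_pair(self, t_pre, t_post):
        self.w = float(np.clip(self.w + stdp_dw(t_post - t_pre),
                               self.w_min, self.w_max))
        return self.w

adaptor = SynapseAdaptor()
w1 = adaptor.on_pair(t_pre=0.0, t_post=5.0)    # pre before post: potentiation
w2 = adaptor.on_pair(t_pre=20.0, t_post=10.0)  # post before pre: depression
```

Time-multiplexing the hardware version amounts to sharing one such update circuit across many synapse slots, which is why the adaptor state can be scaled independently of the neuron array.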

  20. Comparison of concentric needle versus hooked-wire electrodes in the canine larynx.

    PubMed

    Jaffe, D M; Solomon, N P; Robinson, R A; Hoffman, H T; Luschei, E S

    1998-05-01

    The use of a specific electrode type in laryngeal electromyography has not been standardized. Laryngeal electromyography is usually performed with hooked-wire electrodes or concentric needle electrodes. Hooked-wire electrodes have the advantage of allowing laryngeal movement with ease and comfort, whereas concentric needle electrodes are technically easier to handle and may be advanced, withdrawn, or redirected during attempts to place the electrode appropriately. This study examines whether hooked-wire electrodes permit more stable recordings than standard concentric needle electrodes at rest and after large-scale movements of the larynx and surrounding structures. A histologic comparison of tissue injury resulting from placement and removal of the two electrode types was also made by evaluation of the vocal folds. Electrodes were percutaneously placed into the thyroarytenoid muscles of 10 adult canines. The amplitude of electromyographic activity was measured and compared during vagal stimulation before and after large-scale laryngeal movements, and signal consistency over time was examined. The animals were then killed, and vocal fold injury was graded and compared histologically. Waveform morphology did not consistently differ between electrode types, and signals registered from both electrode types showed similar complex action potentials. The variability of electromyographic amplitude was greater for the hooked-wire electrode (p < 0.05), whereas the mean amplitude measures before and after large-scale laryngeal movements did not differ (p > 0.05). Inflammatory responses and hematoma formation were also similar. There is no difference between the hooked-wire electrode and the concentric needle electrode in terms of electrode stability or vocal fold injury in the thyroarytenoid muscle after large-scale laryngeal movements.
