Sample records for large-scale performance studies

  1. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained that aids these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues, specifically the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
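
    A minimal sketch (not from the thesis) of the kind of traffic estimate such a model supports: aggregate Internet load for one configuration of study parameters. All parameter names and values below are illustrative assumptions.

      # Rough estimate of daily traffic generated by a distributed digital
      # library for one assumed configuration of users and request sizes.
      def daily_traffic_gb(users, searches_per_user, results_per_search,
                           record_kb, retrievals_per_user, document_mb):
          search_bytes = (users * searches_per_user * results_per_search
                          * record_kb * 1024)
          retrieval_bytes = users * retrievals_per_user * document_mb * 1024 ** 2
          return (search_bytes + retrieval_bytes) / 1024 ** 3  # GB per day

      # Example: 1 million users, 5 searches each returning 20 x 2 KB metadata
      # records, plus 2 full-document retrievals of 1.5 MB per user.
      print(daily_traffic_gb(1_000_000, 5, 20, 2, 2, 1.5))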

  2. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
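
    A minimal serial sketch of the core idea above (not the paper's PETSc/TAO implementation): enforce the non-negative constraint by recasting the discrete diffusion solve K c = f as a bound-constrained quadratic program, min 0.5 c^T K c - f^T c subject to c >= 0, here solved with SciPy on a small illustrative system.

      import numpy as np
      from scipy.optimize import minimize

      n = 50
      # Illustrative SPD "stiffness" matrix (1D diffusion) and source vector.
      K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      f = np.sin(np.linspace(0.0, 3.0 * np.pi, n))

      def obj(c):
          return 0.5 * c @ K @ c - f @ c

      def grad(c):
          return K @ c - f

      res = minimize(obj, np.zeros(n), jac=grad, method="L-BFGS-B",
                     bounds=[(0.0, None)] * n)
      print("minimum concentration:", res.x.min())  # non-negative by construction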

  3. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  4. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
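
    A hedged sketch of the solver pattern described above, AMG-preconditioned Krylov iteration for a Poisson-type system, using PyAMG and SciPy in place of the Trilinos/Epetra stack used in the paper.

      import numpy as np
      import pyamg
      from pyamg.gallery import poisson
      from scipy.sparse.linalg import gmres

      A = poisson((200, 200), format="csr")      # 2D Poisson test matrix
      b = np.random.default_rng(0).standard_normal(A.shape[0])

      ml = pyamg.smoothed_aggregation_solver(A)  # algebraic multigrid hierarchy
      M = ml.aspreconditioner(cycle="V")         # one V-cycle as preconditioner

      x, info = gmres(A, b, M=M, atol=1e-8)
      print("converged" if info == 0 else f"gmres info = {info}")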

  5. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  6. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes the development of a general-purpose solution algorithm for large-scale problems using "Local Clustering Organization (LCO)" as a new approach to the job-shop scheduling problem (JSP). Building on earlier LCO work on performance-effective large-scale scheduling, it examines how to solve JSP while maintaining stability and obtaining better solutions. To improve solution performance for JSP, the optimization process used by LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering into this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently through clustered parallel search that extends over multiple machines. Applying the extended LCO to problems of various scales verified that it minimizes makespan and improves the stability of performance.

  7. Application of multivariate analysis and mass transfer principles for refinement of a 3-L bioreactor scale-down model--when shake flasks mimic 15,000-L bioreactors better.

    PubMed

    Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa

    2015-01-01

    Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least square, orthogonal partial least square, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in 3-L bioreactor, process performance and product quality data moved closer to that of large scale. © 2015 American Institute of Chemical Engineers.
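
    An illustrative sketch only (not the authors' analysis) of the multivariate view used in such comparisons: a PCA scores table of batch-level process parameters, used to see whether shake-flask or 3-L runs cluster closer to the 15,000-L runs. The file name and column names below are assumptions.

      import pandas as pd
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      df = pd.read_csv("cell_culture_batches.csv")       # hypothetical data set
      params = ["pCO2", "pH", "titer", "viability", "lactate"]  # assumed columns

      scores = PCA(n_components=2).fit_transform(
          StandardScaler().fit_transform(df[params]))

      # Print the scores by scale; runs that cluster together in PC space
      # indicate which small-scale system tracks the 15,000-L runs most closely.
      for scale, (pc1, pc2) in zip(df["scale"], scores):
          print(f"{scale:>12}  PC1={pc1:+.2f}  PC2={pc2:+.2f}")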

  8. The Role of Reading Comprehension in Large-Scale Subject-Matter Assessments

    ERIC Educational Resources Information Center

    Zhang, Ting

    2013-01-01

    This study was designed with the overall goal of understanding how difficulties in reading comprehension are associated with early adolescents' performance in large-scale assessments in subject domains including science and civic-related social studies. The current study extended previous research by taking a cognition-centered approach based on…

  9. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging the large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties such as cloud amount, hydrometeor vertical profiles, and cloud water content in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  10. Requirements for a mobile communications satellite system. Volume 3: Large space structures measurements study

    NASA Technical Reports Server (NTRS)

    Akle, W.

    1983-01-01

    This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS) and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna, with special emphasis on antenna surface accuracy and pointing stability. Instrumentation and measurement systems applicable to LSS were selected from existing or ongoing technology developments. Laser ranging and angulation systems, presently at breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground to orbit, as well as size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costs of the various tests and measurement systems were derived and are included in the report.

  11. NAEP Validity Studies: Improving the Information Value of Performance Items in Large Scale Assessments. Working Paper No. 2003-08

    ERIC Educational Resources Information Center

    Pearson, P. David; Garavaglia, Diane R.

    2003-01-01

    The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…

  12. Cohort Profile of the Goals Study: A Large-Scale Research of Physical Activity in Dutch Students

    ERIC Educational Resources Information Center

    de Groot, Renate H. M.; van Dijk, Martin L.; Kirschner, Paul A.

    2015-01-01

    The GOALS study (Grootschalig Onderzoek naar Activiteiten van Limburgse Scholieren [Large-scale Research of Activities in Dutch Students]) was set up to investigate possible associations between different forms of physical activity and inactivity with cognitive performance, academic achievement and mental well-being. It was conducted at a…

  13. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale Internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  14. Achieving a Successful Scale-Down Model and Optimized Economics through Parvovirus Filter Validation using Purified TrueSpikeTM Viruses.

    PubMed

    De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul

    2015-01-01

    This article describes a four virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike(TM) viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m(2) filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. All virus log reduction values were between ≥3.66 and ≥5.60. The use of TrueSpike(TM) at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance from the PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at the downstream stages in the process is expected to be and is typically low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small version of the larger manufacturing size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process. However, the virus preparation used to spike the feed material often contains impurities that contribute adversely to virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and put into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is the unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage. © PDA, Inc. 2015.
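
    A simple illustration of the log reduction value (LRV) quantified in such studies: LRV = log10(total virus in the feed / total virus in the filtrate). The numbers below are made up; when no virus is detected, the value is reported as a ">=" figure based on the assay detection limit.

      import math

      def lrv(feed_titer, feed_volume, filtrate_titer, filtrate_volume):
          """Titers in infectious units per mL, volumes in mL."""
          return math.log10((feed_titer * feed_volume)
                            / (filtrate_titer * filtrate_volume))

      # Filtrate titer set at an assumed assay detection limit.
      print(f"LRV >= {lrv(1e7, 100, 3.2e1, 500):.2f}")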

  15. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  17. Paying for performance: Performance incentives increase desire for the reward object.

    PubMed

    Hur, Julia D; Nordgren, Loran F

    2016-09-01

    The current research examines how exposure to performance incentives affects one's desire for the reward object. We hypothesized that the flexible nature of performance incentives creates an attentional fixation on the reward object (e.g., money), which leads people to become more desirous of the rewards. Results from 5 laboratory experiments and 1 large-scale field study provide support for this prediction. When performance was incentivized with monetary rewards, participants reported being more desirous of money (Study 1), put in more effort to earn additional money in an ensuing task (Study 2), and were less willing to donate money to charity (Study 4). We replicated the result with nonmonetary rewards (Study 5). We also found that performance incentives increased attention to the reward object during the task, which in part explains the observed effects (Study 6). A large-scale field study replicated these findings in a real-world setting (Study 7). One laboratory experiment failed to replicate (Study 3). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
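
    A hedged sketch of the kind of verification used when comparing a gridded product against a gauge record: bias, RMSE and correlation computed at several temporal aggregations. The synthetic series below stand in for real gauge and product data.

      import numpy as np
      import pandas as pd

      def verify(gauge, product, freq):
          g = gauge.resample(freq).sum()
          p = product.resample(freq).sum()
          err = p - g
          return {"bias": err.mean(),
                  "rmse": float(np.sqrt((err ** 2).mean())),
                  "corr": p.corr(g)}

      # Synthetic 10-year daily series at one station (illustrative only).
      idx = pd.date_range("2000-01-01", periods=3650, freq="D")
      rs = np.random.default_rng(1)
      gauge_mm = pd.Series(rs.gamma(0.5, 4.0, len(idx)), index=idx)
      product_mm = gauge_mm * rs.normal(1.0, 0.2, len(idx)).clip(0)

      for freq in ["D", "MS", "YS"]:   # daily, monthly, annual aggregation
          print(freq, verify(gauge_mm, product_mm, freq))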

  19. Development of Novel Therapeutics for Neglected Tropical Disease Leishmaniasis

    DTIC Science & Technology

    2015-10-01

    Approved for public release; distribution unlimited. We undertook planning of a kick-off coordination meeting. A low-dose infection model of cutaneous leishmaniasis (CL) was validated... A large-scale synthesis of PEN was optimized, and in vitro studies revealed that PEN alters the parasite lipidome. Further studies were... Keywords: Pentalinonsterol, Leishmania, cutaneous leishmaniasis, treatment.

  20. Performance of Aqueous Film Forming Foam (AFFF) on Large-Scale Hydroprocessed Renewable Jet (HRJ) Fuel Fires

    DTIC Science & Technology

    2011-12-01

    ...aqueous film forming foam (AFFF) firefighting agents and equipment are capable of... (AFRL-RX-TY-TR-2012-0012; contract FA4819-09-C-0030)

  1. A study on the required performance of a 2G HTS wire for HTS wind power generators

    NASA Astrophysics Data System (ADS)

    Sung, Hae-Jin; Park, Minwon; Go, Byeong-Soo; Yu, In-Keun

    2016-05-01

    YBCO or REBCO coated conductor (2G) materials are developed for their superior performance at high magnetic field and temperature. Power system applications based on high temperature superconducting (HTS) 2G wire technology are attracting attention, including large-scale wind power generators. In particular, to solve problems associated with the foundations and mechanical structure of offshore wind turbines, due to the large diameter and heavy weight of the generator, an HTS generator is suggested as one of the key technologies. Many researchers have tried to develop feasible large-scale HTS wind power generator technologies. In this paper, a study on the required performance of a 2G HTS wire for large-scale wind power generators is discussed. A 12 MW class large-scale wind turbine and an HTS generator are designed using 2G HTS wire. The total length of the 2G HTS wire for the 12 MW HTS generator is estimated, and the essential prerequisites of the 2G HTS wire based generator are described. The magnetic field distributions of a pole module are illustrated, and the mechanical stress and strain of the pole module are analysed. Finally, a reasonable price for 2G HTS wire for commercialization of the HTS generator is suggested, reflecting the results of electromagnetic and mechanical analyses of the generator.

  2. High-Stakes Accountability: Student Anxiety and Large-Scale Testing

    ERIC Educational Resources Information Center

    von der Embse, Nathaniel P.; Witmer, Sara E.

    2014-01-01

    This study examined the relationship between student anxiety about high-stakes testing and their subsequent test performance. The FRIEDBEN Test Anxiety Scale was administered to 1,134 11th-grade students, and data were subsequently collected on their statewide assessment performance. Test anxiety was a significant predictor of test performance…

  3. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    ERIC Educational Resources Information Center

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  4. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping, because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  5. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  6. Large-Scale Investigation of the Role of Trait Activation Theory for Understanding Assessment Center Convergent and Discriminant Validity

    ERIC Educational Resources Information Center

    Lievens, Filip; Chasteen, Christopher S.; Day, Eric Anthony; Christiansen, Neil D.

    2006-01-01

    This study used trait activation theory as a theoretical framework to conduct a large-scale test of the interactionist explanation of the convergent and discriminant validity findings obtained in assessment centers. Trait activation theory specifies the conditions in which cross-situationally consistent and inconsistent candidate performances are…

  7. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
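
    A minimal sketch of the underlying idea (not the authors' implementation): treat the small-scale response-surface prediction at a given operating point as a normal prior and update it with a few large-scale observations via a conjugate normal-normal Bayesian update. All numbers are illustrative.

      import numpy as np

      def bayes_update(prior_mean, prior_var, observations, obs_var):
          """Posterior mean/variance for a normal mean with known variances."""
          obs = np.asarray(observations, dtype=float)
          post_prec = 1.0 / prior_var + len(obs) / obs_var
          post_mean = (prior_mean / prior_var + obs.sum() / obs_var) / post_prec
          return post_mean, 1.0 / post_prec

      # Small-scale DoE predicts 85% dissolution at this point; three assumed
      # large-scale batches measured 80, 82 and 81%.
      mean, var = bayes_update(85.0, 9.0, [80.0, 82.0, 81.0], 4.0)
      print(f"corrected prediction: {mean:.1f}% (sd {var ** 0.5:.2f})")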

  8. Large-Scale Brain Network Coupling Predicts Total Sleep Deprivation Effects on Cognitive Capacity

    PubMed Central

    Wang, Lubin; Zhai, Tianye; Zou, Feng; Ye, Enmao; Jin, Xiao; Li, Wuju; Qi, Jianlin; Yang, Zheng

    2015-01-01

    Interactions between large-scale brain networks have received most attention in the study of cognitive dysfunction of human brain. In this paper, we aimed to test the hypothesis that the coupling strength of large-scale brain networks will reflect the pressure for sleep and will predict cognitive performance, referred to as sleep pressure index (SPI). Fourteen healthy subjects underwent this within-subject functional magnetic resonance imaging (fMRI) study during rested wakefulness (RW) and after 36 h of total sleep deprivation (TSD). Self-reported scores of sleepiness were higher for TSD than for RW. A subsequent working memory (WM) task showed that WM performance was lower after 36 h of TSD. Moreover, SPI was developed based on the coupling strength of salience network (SN) and default mode network (DMN). Significant increase of SPI was observed after 36 h of TSD, suggesting stronger pressure for sleep. In addition, SPI was significantly correlated with both the visual analogue scale score of sleepiness and the WM performance. These results showed that alterations in SN-DMN coupling might be critical in cognitive alterations that underlie the lapse after TSD. Further studies may validate the SPI as a potential clinical biomarker to assess the impact of sleep deprivation. PMID:26218521

  9. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
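
    A toy illustration of why exponential integrators handle stiffness well (this is not the EPIRK/EPIC machinery used in the paper): for a linear stiff system y' = A y, the exponential step y_{n+1} = exp(hA) y_n is exact for any step size, whereas explicit Euler must respect the stability limit set by the fastest mode.

      import numpy as np
      from scipy.linalg import expm

      A = np.diag([-1.0, -100.0, -10000.0])   # widely separated time scales
      y0 = np.ones(3)
      h = 0.1

      y_exp = expm(h * A) @ y0                # one exponential-integrator step
      y_euler = (np.eye(3) + h * A) @ y0      # one explicit Euler step

      print("exponential step:", y_exp)       # all components decay
      print("explicit Euler  :", y_euler)     # stiff components diverge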

  10. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.
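
    A sketch, in the spirit of the study but not its Charon code, of a minimal MPI-only probe using mpi4py: every rank times a local kernel and the slowest rank defines the reported time, as one would do when filling all 16 cores of a node with MPI ranks (run with, e.g., mpiexec -n 16 python probe.py).

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 1_000_000                       # fixed work per rank (weak scaling)
      x = np.random.default_rng(rank).standard_normal(n)

      comm.Barrier()
      t0 = MPI.Wtime()
      local = float(np.dot(x, x))         # stand-in for the local FE kernel
      total = comm.allreduce(local, op=MPI.SUM)
      t_max = comm.allreduce(MPI.Wtime() - t0, op=MPI.MAX)

      if rank == 0:
          print(f"{size} ranks: {t_max:.4f} s on the slowest rank")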

  11. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large-sized grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module, Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale mesoscale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  12. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1974-01-01

    Based on the premises that (1) magnetic suspension techniques can play a useful role in large-scale aerodynamic testing and (2) superconductor technology offers the only practical hope for building large-scale magnetic suspensions, an all-superconductor three-component magnetic suspension and balance facility was built as a prototype and was tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities have been made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  13. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities. [cryogenic transonic wind tunnel]

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    Based on the premises that magnetic suspension techniques can play a useful role in large scale aerodynamic testing, and that superconductor technology offers the only practical hope for building large scale magnetic suspensions, an all-superconductor 3-component magnetic suspension and balance facility was built as a prototype and tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities at Langley Research Center were made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  14. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  15. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  16. Field-aligned currents' scale analysis performed with the Swarm constellation

    NASA Astrophysics Data System (ADS)

    Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter

    2015-01-01

    We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.

  17. Photographic images captured while sampling for bald eagles near the Davis Pond freshwater diversion structure in Barataria Bay, Louisiana (2009-10)

    USGS Publications Warehouse

    Jenkins, Jill A.; Jeske, Clinton W.; Allain, Larry K.

    2011-01-01

    The implementation of freshwater diversions in large-scale coastal restoration schemes presents several scientific and management considerations. Large-scale environmental restructuring necessitates aquatic biomonitoring, and during such field studies, photographs that document animals and habitat may be captured. Among the biomonitoring studies performed in conjunction with the Davis Pond freshwater diversion structure south of New Orleans, Louisiana, only postdiversion study images are readily available, and these are presented here.

  18. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using the finite element method. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonic expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparisons among these simulations and the role that small-scale fields play in shaping the large-scale fields, as expressed through the SGS terms in the LES.
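
    A rough stand-alone illustration of the scale-similarity idea mentioned above (not the Calypso spherical-harmonic implementation): apply a Gaussian test filter to resolved fields and form the SGS flux estimate tau ~ <u v> - <u><v>, where <.> denotes the filtered field. The toy 2D fields below are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      u = rng.standard_normal((64, 64))       # toy resolved velocity components
      v = rng.standard_normal((64, 64))

      def filt(f, sigma=2.0):
          return gaussian_filter(f, sigma=sigma, mode="wrap")  # periodic box

      tau_uv = filt(u * v) - filt(u) * filt(v)  # scale-similarity SGS stress
      print("rms SGS stress estimate:", float(np.sqrt((tau_uv ** 2).mean())))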

  19. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  20. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration

    NASA Technical Reports Server (NTRS)

    Everett, J. R. (Principal Investigator)

    1983-01-01

    Two fully processed test tapes were enhanced and evaluated at scales up to 1:10,000, using both hardcopy output and interactive screen display. At large scale, the Detroit, Michigan scene shows evidence of an along-line data slip every sixteenth line in TM channel 2. Very large scale products generated in false color using channels 1, 3, and 4 should be very acceptable for interpretation at scales up to 1:50,000 and useful for change mapping probably up to a scale of 1:24,000. Striping visible in water bodies in both natural and color products indicates that the detector calibration is probably performing below preflight specification. For a set of 512 x 512 windows within the NE Arkansas scene, the variance-covariance matrices were computed and principal component analyses performed. Initial analysis suggests that the shortwave infrared TM 5 and 6 channels are a highly significant data source. The thermal channel (TM 7) shows negative correlation with TM 1 and 4.
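
    A hedged sketch of the window analysis described above: form the variance-covariance matrix of the TM channels over a 512 x 512 window and take its eigendecomposition (principal components). The random `window` array stands in for real Thematic Mapper data.

      import numpy as np

      rng = np.random.default_rng(0)
      window = rng.integers(0, 256, size=(7, 512, 512)).astype(float)  # 7 bands

      X = window.reshape(7, -1)                # pixels as observations
      cov = np.cov(X)                          # 7 x 7 variance-covariance matrix
      eigval, eigvec = np.linalg.eigh(cov)

      explained = eigval[::-1] / eigval.sum()  # largest component first
      print("variance explained per principal component:", explained.round(3))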

  1. A Large-Scale Inquiry-Based Astronomy Intervention Project: Impact on Students' Content Knowledge Performance and Views of Their High School Science Classroom

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena; Deehan, James

    2016-01-01

    In this paper, we present the results from a study of the impact on students involved in a large-scale inquiry-based astronomical high school education intervention in Australia. Students in this intervention were led through an educational design allowing them to undertake an investigative approach to understanding the lifecycle of stars more…

  2. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  3. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon, biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project inventions can divert large volumes of cellulose waste into high-performance, low embodied energy, environmental construction...

  4. Large scale shell model study of the evolution of mixed-symmetry states in chains of nuclei around 132Sn

    NASA Astrophysics Data System (ADS)

    Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.

    2012-10-01

    Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.

  5. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial cover of the study area, or a lower sampling resolution at small scales, which results in local data uncertainties but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We first built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). The field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The model results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing numbers of sampling points per field, we averaged the measured abundance of the samplings within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described by a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, the assessment of model results, and ecological inferences.
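
    An illustrative fit of the saturation relationship described above between samples per field and explained deviance, using the three values reported in the abstract (1, 2 and 50 samples per field); the exponential-saturation form is an assumption, not the study's model.

      import numpy as np
      from scipy.optimize import curve_fit

      n_samples = np.array([1.0, 2.0, 50.0])
      expl_dev = np.array([0.20, 0.38, 0.91])      # values from the abstract

      def saturation(n, d_max, k):
          return d_max * (1.0 - np.exp(-n / k))    # assumed functional form

      (d_max, k), _ = curve_fit(saturation, n_samples, expl_dev, p0=[0.9, 3.0])
      print(f"asymptote ~{d_max:.2f}, e-folding scale ~{k:.1f} samples per field")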

  6. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale two-component model has been established for simulating very long-term, multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, thereby reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km², including Germany and parts of nearby countries. The model performance has been examined multi-objectively, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.

  7. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the inherent spatial heterogeneity. The weather generator is a multisite, multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields were generated for the study area. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
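
    The final step of such a model chain, reading flood quantiles off a very long simulated discharge record, can be sketched generically. The example below is not the weather-generator/SWIM system described above; it only illustrates how empirical return periods are obtained from annual maxima of a long synthetic series, using placeholder random discharge data.

    ```python
    # Minimal sketch: empirical flood quantiles from a long synthetic daily
    # discharge series via annual maxima and Weibull plotting positions.
    # The gamma-distributed discharge below is a placeholder, not output of
    # the weather generator + SWIM model chain described in the abstract.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 10_000
    daily_q = rng.gamma(shape=2.0, scale=50.0, size=(n_years, 365))  # m^3/s

    annual_max = np.sort(daily_q.max(axis=1))[::-1]      # descending annual maxima
    ranks = np.arange(1, n_years + 1)
    return_period = (n_years + 1) / ranks                # Weibull plotting position

    for T in (10, 100, 1000):
        q_T = annual_max[return_period >= T][-1]         # ~T-year flood quantile
        print(f"~{T}-year flood: {q_T:.0f} m^3/s")
    ```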

  8. Sex differences in virtual navigation influenced by scale and navigation experience.

    PubMed

    Padilla, Lace M; Creem-Regehr, Sarah H; Stefanucci, Jeanine K; Cashdan, Elizabeth A

    2017-04-01

    The Morris water maze is a spatial abilities test adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans. This is because its standard design, which manipulates proximal (close) and distal (far) cues, applies to human navigation. However, virtual Morris water mazes test navigation skills on a scale that is vastly smaller than natural human navigation. Many researchers have argued that navigating in large and small scales is fundamentally different, and small-scale navigation might not simulate natural human navigation. Other work has suggested that navigation experience could influence spatial skills. To address the question of how individual differences influence navigational abilities in differently scaled environments, we employed both a large- (146.4 m in diameter) and a traditional- (36.6 m in diameter) scaled virtual Morris water maze along with a novel measure of navigation experience (lifetime mobility). We found sex differences on the small maze in the distal cue condition only, but in both cue-conditions on the large maze. Also, individual differences in navigation experience modulated navigation performance on the virtual water maze, showing that higher mobility was related to better performance with proximal cues for only females on the small maze, but for both males and females on the large maze.

  9. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    PubMed

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach that parameterizes the derived information within a unified semi-supervised learning framework (SSLF-GR) and then simultaneously optimizes the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
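
    The general idea of letting heterogeneous information steer a link-based ranker can be sketched with a feature-parameterized PageRank. The code below is a generic illustration only: the exponential edge-affinity form, the parameter vector w and the tiny graph are assumptions, and this is not the SSP/SSLF-GR method of the paper.

    ```python
    # Minimal sketch: PageRank whose transition probabilities are derived from
    # edge features through a weight vector w, computed by power iteration.
    # Generic illustration of combining link structure with heterogeneous
    # information; not the semi-supervised PageRank (SSP) approach above.
    import numpy as np

    def feature_pagerank(adj, edge_feats, w, alpha=0.85, iters=100):
        """adj: (n, n) 0/1 adjacency; edge_feats: (n, n, d) per-edge features."""
        n = adj.shape[0]
        affinity = adj * np.exp(edge_feats @ w)              # weighted edges
        row_sums = affinity.sum(axis=1, keepdims=True)
        trans = np.divide(affinity, row_sums,
                          out=np.full_like(affinity, 1.0 / n),
                          where=row_sums > 0)                # row-stochastic
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = (1 - alpha) / n + alpha * (trans.T @ r)      # PageRank update
        return r

    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 1, 0], [0, 0, 1, 0],
                    [1, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
    feats = rng.normal(size=(4, 4, 2))                       # hypothetical features
    print(feature_pagerank(adj, feats, w=np.array([0.5, -0.2])))
    ```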

  10. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Progress Report No. 3 from Intelligent Automation Incorporated on Information Tailoring Enhancements for Large-Scale Social Data, summarizing work performed within the reporting period, including enhanced Named Entity Recognition (NER).

  11. Performance Characterization of Global Address Space Applications: A Case Study with NWChem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer

    The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generation Infiniband interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.

  12. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
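
    The recommended logit-scale pooling of C-statistics can be sketched with a standard DerSimonian-Laird random-effects calculation. The study-level values below are hypothetical placeholders, not the QRISK2 validation data, and the delta-method variance on the logit scale is a common approximation rather than the paper's exact procedure.

    ```python
    # Minimal sketch: random-effects meta-analysis of C-statistics on the
    # logit scale, back-transformed to the C scale. Hypothetical inputs.
    import numpy as np

    c = np.array([0.72, 0.78, 0.69, 0.81, 0.75])      # per-study C-statistics
    se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.03])  # per-study standard errors

    y = np.log(c / (1 - c))                           # logit transform
    v = (se_c / (c * (1 - c))) ** 2                   # delta-method variance

    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (v + tau2)                           # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    inv_logit = lambda x: 1.0 / (1.0 + np.exp(-x))
    lo, hi = inv_logit(y_re - 1.96 * se_re), inv_logit(y_re + 1.96 * se_re)
    print(f"pooled C = {inv_logit(y_re):.3f} (95% CI {lo:.3f}-{hi:.3f}), tau^2 = {tau2:.4f}")
    ```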

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzi, Silvio; Hereld, Mark; Insley, Joseph

    In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.

  14. Spatial Structure of Large-Scale Plasma Density Perturbations HF-Induced in the Ionospheric F2 Region

    NASA Astrophysics Data System (ADS)

    Frolov, V. L.; Komrakov, G. P.; Glukhov, Ya. V.; Andreeva, E. S.; Kunitsyn, V. E.; Kurbatov, G. A.

    2016-07-01

    We consider the experimental results obtained by studying the large-scale structure of the HF-disturbed ionospheric region. The experiments were performed using the SURA heating facility. The disturbed ionospheric region was sounded by signals radiated by GPS navigation satellite beacons as well as by signals of low-orbit satellites (radio tomography). The results of the experiments show that large-scale plasma density perturbations induced at altitudes higher than the F2 layer maximum can contribute significantly to the measured variations of the total electron content and can, with a certain arrangement of the reception points, be measured by the GPS sounding method.

  15. Unimanual Performance across the Age Span

    ERIC Educational Resources Information Center

    Bryden, P.J.; Roy, E.A.

    2005-01-01

    The purpose of the current investigation was to examine the age-related changes in the performance of the two hands on the Annett pegboard (Annett, 1970). The current study was part of a large-scale study investigating the development of unimanual and bimanual performance. Three hundred and two right-handed individuals participated in the present…

  16. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  17. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  18. High-Throughput Microbore UPLC-MS Metabolic Phenotyping of Urine for Large-Scale Epidemiology Studies.

    PubMed

    Gray, Nicola; Lewis, Matthew R; Plumb, Robert S; Wilson, Ian D; Nicholson, Jeremy K

    2015-06-05

    A new generation of metabolic phenotyping centers are being created to meet the increasing demands of personalized healthcare, and this has resulted in a major requirement for economical, high-throughput metabonomic analysis by liquid chromatography-mass spectrometry (LC-MS). Meeting these new demands represents an emerging bioanalytical problem that must be solved if metabolic phenotyping is to be successfully applied to large clinical and epidemiological sample sets. Ultraperformance (UP)LC-MS-based metabolic phenotyping, based on 2.1 mm i.d. LC columns, enables comprehensive metabolic phenotyping but, when employed for the analysis of thousands of samples, results in high solvent usage. The use of UPLC-MS employing 1 mm i.d. columns for metabolic phenotyping rather than the conventional 2.1 mm i.d. methodology shows that the resulting optimized microbore method provided equivalent or superior performance in terms of peak capacity, sensitivity, and robustness. On average, we also observed, when using the microbore scale separation, an increase in response of 2-3 fold over that obtained with the standard 2.1 mm scale method. When applied to the analysis of human urine, the 1 mm scale method showed no decline in performance over the course of 1000 analyses, illustrating that microbore UPLC-MS represents a viable alternative to conventional 2.1 mm i.d. formats for routine large-scale metabolic profiling studies while also resulting in a 75% reduction in solvent usage. The modest increase in sensitivity provided by this methodology also offers the potential to either reduce sample consumption or increase the number of metabolite features detected with confidence due to the increased signal-to-noise ratios obtained. Implementation of this miniaturized UPLC-MS method of metabolic phenotyping results in clear analytical, economic, and environmental benefits for large-scale metabolic profiling studies with similar or improved analytical performance compared to conventional UPLC-MS.

  19. Examiners and Content and Site: Oh My! a National Organization's Investigation of Score Variation in Large-Scale Performance Assessments

    ERIC Educational Resources Information Center

    Sebok, Stefanie S.; Roy, Marguerite; Klinger, Don A.; De Champlain, André F.

    2015-01-01

    Examiner effects and content specificity are two well known sources of construct irrelevant variance that present great challenges in performance-based assessments. National medical organizations that are responsible for large-scale performance based assessments experience an additional challenge as they are responsible for administering…

  20. Supermassive Black Hole Binaries in High Performance Massively Parallel Direct N-body Simulations on Large GPU Clusters

    NASA Astrophysics Data System (ADS)

    Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.

    2012-07-01

    Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large, high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, together with studies for the largest GPU clusters in China, with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, the effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.

  1. On the validity of self-report assessment of cognitive abilities: Attentional control scale associations with cognitive performance, emotional adjustment, and personality.

    PubMed

    Williams, Paula G; Rau, Holly K; Suchy, Yana; Thorgusen, Sommer R; Smith, Timothy W

    2017-05-01

    Individual differences in attentional control involve the ability to voluntarily direct, shift, and sustain attention. In studies of the role of attentional control in emotional adjustment, social relationships, and vulnerability to the effects of stress, self-report questionnaires are commonly used to measure this construct. Yet, convincing evidence of the association between self-report scales and actual cognitive performance has not been demonstrated. Across 2 independent samples, we examined associations between self-reported attentional control (Attentional Control Scale; ACS), self-reported emotional adjustment, Five-Factor Model personality traits (NEO Personality Inventory-Revised) and performance measures of attentional control. Study 1 examined behavioral performance on the Attention Network Test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002) and the Modified Switching Task (MST; Suchy & Kosson, 2006) in a large sample (n = 315) of healthy young adults. Study 2 (n = 78) examined behavioral performance on standardized neuropsychological tests of attention, including Conner's Continuous Performance Test-II and subtests from the Wechsler Adult Intelligence Scales, Third Edition (WAIS-III; Psychological Corporation, 1997) and Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001). Results indicated that the ACS was largely unrelated to behavioral performance measures of attentional control but was significantly associated with emotional adjustment, neuroticism, and conscientiousness. These findings suggest that although self-reported attentional control may be a useful construct, researchers using the ACS should exercise caution in interpreting it as a proxy for actual cognitive ability or performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    NASA Technical Reports Server (NTRS)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05 -NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) have been conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  3. Predicting the propagation of concentration and saturation fronts in fixed-bed filters.

    PubMed

    Callery, O; Healy, M G

    2017-10-15

    The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the suitability of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests were used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
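
    The abstract does not state which breakthrough model the authors fitted, so as a generic illustration the sketch below uses the classic Thomas model, a common closed-form description of fixed-bed breakthrough curves; every parameter value is a hypothetical placeholder.

    ```python
    # Minimal sketch: Thomas-model breakthrough curve for a fixed-bed column.
    # Generic illustration only; not necessarily the model used in the study,
    # and all parameter values are hypothetical.
    import numpy as np

    def thomas_breakthrough(t_h, k_th, q0, m, c0, Q):
        """C/C0 vs time t_h (h); k_th: rate constant (L/(mg*h)); q0: capacity
        (mg/g); m: adsorbent mass (g); c0: influent conc. (mg/L); Q: flow (L/h)."""
        return 1.0 / (1.0 + np.exp(k_th * (q0 * m - c0 * Q * t_h) / Q))

    for t in (0, 500, 750, 1000, 2000):                       # hours
        ratio = thomas_breakthrough(t, k_th=0.002, q0=1.5, m=500.0, c0=2.0, Q=0.5)
        print(f"t = {t:5d} h   C/C0 = {ratio:.3f}")
    ```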

  4. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large-scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  5. Simulations of hypervelocity impacts for asteroid deflection studies

    NASA Astrophysics Data System (ADS)

    Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.

    2016-12-01

    The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.

  6. Scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted cylindrical element

    NASA Astrophysics Data System (ADS)

    Tang, Zhanqi; Jiang, Nan

    2018-05-01

    This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were taken at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. It was found that the large-scale fluctuations exert a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions around the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was made for small-scale frequency modulation; however, the dissipative-scale frequency modulation appears to be independent of the cylindrical perturbation. The phase-relationship observations indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The dependence of this phase relationship on the integral time scale is also discussed. Furthermore, the discrepancy between the small- and dissipative-scale time shifts relative to the large-scale motions was examined, which indicates that the small-scale amplitude/frequency leads that of the dissipative scales.

  7. Users matter : multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  8. Evaluation of the reliability and validity for X16 balance testing scale for the elderly.

    PubMed

    Ju, Jingjuan; Jiang, Yu; Zhou, Peng; Li, Lin; Ye, Xiaolei; Wu, Hongmei; Shen, Bin; Zhang, Jialei; He, Xiaoding; Niu, Chunjin; Xia, Qinghua

    2018-05-10

    Balance performance is considered an indicator of functional status in the elderly; large-scale population screening and evaluation in the community context, followed by proper interventions, would therefore be of great significance at the public health level. However, there has been no suitable balance testing scale available for large-scale studies in the unique community context of urban China. A balance scale named the X16 balance testing scale was developed, composed of 3 domains and 16 items. The balance abilities of a total of 1985 functionally independent and active community-dwelling elderly adults were tested using the X16 scale. The internal consistency, split-half reliability, content validity, construct validity, and discriminant validity of the X16 balance testing scale were evaluated. Factor analysis was performed to identify an alternative factor structure. The eigenvalues of factors 1, 2, and 3 were 8.53, 1.79, and 1.21, respectively, and their cumulative contribution to the total variance reached 72.0%. These 3 factors mainly represented the domains of static balance, postural stability, and dynamic balance. The Cronbach alpha coefficient for the scale was 0.933. The Spearman correlation coefficients between items and their corresponding domains ranged from 0.538 to 0.964. The correlation coefficients between each item and its corresponding domain were higher than the coefficients between that item and the other domains. With increasing age, the scores of overall balance performance and of the domains static balance, postural stability, and dynamic balance declined gradually (P < 0.001), and the proportion of the elderly with intact balance performance decreased gradually (P < 0.001). The reliability and validity of the X16 balance testing scale are both adequate and acceptable. Due to its simplicity and speed of use, it is practical for repeated and routine use, especially in community settings and for large-scale screening.
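
    The internal-consistency figure reported above (a Cronbach alpha of 0.933) follows from the standard alpha formula, sketched below on simulated item scores; the random data are a stand-in and are not the X16 responses.

    ```python
    # Minimal sketch: Cronbach's alpha from a respondents-by-items score
    # matrix. The simulated data below are placeholders, not the X16 dataset.
    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_respondents, n_items) matrix of item scores."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)           # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))                   # shared "balance ability"
    items = latent + 0.8 * rng.normal(size=(200, 16))    # 16 correlated items
    print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
    ```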

  9. Attributes and Behaviors of Performance-Centered Systems.

    ERIC Educational Resources Information Center

    Gery, Gloria

    1995-01-01

    Examines attributes, characteristics, and behaviors of performance-centered software packages that are emerging in the consumer software marketplace and compares them with large-scale systems software being designed by internal information systems staffs and vendors of large-scale software designed for financial, manufacturing, processing, and…

  10. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the regime of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for the prediction of runtime is developed, which shows good agreement with actual runtimes from numerical tests.
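
    The abstract mentions a runtime-prediction model without giving its form; a common minimal choice is a two-parameter fit T(p) = a + b/p to measured wall-clock times, from which speed-up and parallel efficiency follow. The timings below are hypothetical and the model form is an assumption, not the authors' model.

    ```python
    # Minimal sketch: fit T(p) = a + b/p to measured runtimes and report
    # speed-up and parallel efficiency. Hypothetical timings; assumed form.
    import numpy as np

    procs = np.array([16, 32, 64, 128, 256], dtype=float)
    runtime = np.array([410.0, 215.0, 118.0, 66.0, 41.0])      # seconds (made up)

    A = np.column_stack([np.ones_like(procs), 1.0 / procs])    # design matrix
    (a, b), *_ = np.linalg.lstsq(A, runtime, rcond=None)

    speedup = runtime[0] / runtime                             # vs. smallest run
    efficiency = speedup * procs[0] / procs
    print(f"fitted model: T(p) ~ {a:.1f} + {b:.0f}/p seconds")
    for p, s, e in zip(procs.astype(int), speedup, efficiency):
        print(f"p = {p:4d}   speedup = {s:5.2f}   efficiency = {e:4.2f}")
    ```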

  11. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies

    PubMed Central

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets. PMID:25937948

  12. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets.

  13. Ecological Regional Analysis Applied to Campus Sustainability Performance

    ERIC Educational Resources Information Center

    Weber, Shana; Newman, Julie; Hill, Adam

    2017-01-01

    Purpose: Sustainability performance in higher education is often evaluated at a generalized large scale. It remains unknown to what extent campus efforts address regional sustainability needs. This study begins to address this gap by evaluating trends in performance through the lens of regional environmental characteristics.…

  14. The impact of pH inhomogeneities on CHO cell physiology and fed-batch process performance - two-compartment scale-down modelling and intracellular pH excursion.

    PubMed

    Brunner, Matthias; Braun, Philipp; Doppler, Philipp; Posch, Christoph; Behrens, Dirk; Herwig, Christoph; Fricke, Jens

    2017-07-01

    Due to long mixing times and base addition from the top of the vessel, pH inhomogeneities are likely to occur during large-scale mammalian processes. The goal of this study was to set up a scale-down model of a 10-12 m³ stirred tank bioreactor and to investigate the effect of pH perturbations on CHO cell physiology and process performance. Short-term changes in extracellular pH are hypothesized to affect intracellular pH and thus cell physiology. Therefore, batch fermentations including pH shifts to 9.0 and 7.8 were conducted in regular one-compartment systems. The short-term adaptation of the cells' intracellular pH showed an immediate increase in response to elevated extracellular pH. On this basis of fundamental knowledge, a two-compartment system was established which is capable of simulating defined pH inhomogeneities. In contrast to state-of-the-art literature, the scale-down model includes parameters (e.g., the volume of the inhomogeneous zone) as they might occur during large-scale processes. pH inhomogeneity studies in the two-compartment system were performed, simulating temporary zones of pH 9.0. The specific growth rate, especially during the exponential growth phase, was strongly affected, resulting in a decreased maximum viable cell density and final product titer. The gathered results indicate that even short-term exposure of cells to elevated pH values during large-scale processes can affect cell physiology and overall process performance. In particular, it could be shown for the first time that pH perturbations, which might occur during the early process phase, have to be considered in scale-down models of mammalian processes. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Simulating Daily and Sub-daily Water Flow in Large, Semi-arid Watershed Using SWAT: A Case Study of Nueces River Basin, Texas

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2015-12-01

    Runoff generated during heavy rainfall imposes quick, but often intense, changes in stream flow, which increase the chance of flash floods in the vicinity of the streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers the meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically based, semi-distributed model that is capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT for sub-daily water flow modeling within large watersheds has not been studied much compared to the long-term and daily time scales. In this study we investigate the water flow in a large, semi-arid watershed, the Nueces River Basin (NRB), with a drainage area of 16,950 mi², located in South Texas, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall, (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed, and (3) examining means for model performance improvement during model calibration and verification based on the results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallarno, George; Rogers, James H; Maxwell, Don E

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  17. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  18. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  19. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.

  20. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
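
    A back-of-envelope check of how the quoted figures relate (assuming the "parallel overhead of 26%" means 26% of the aggregate time is lost to parallelization, which is an interpretation, not a statement from the paper):

    ```latex
    \begin{align*}
      \text{parallel efficiency} &\approx 1 - 0.26 = 0.74,\\
      \text{per-processor rate}  &\approx \frac{9\times 10^{6}\ \text{tetrahedra/s}}{511}
                                  \approx 1.8\times 10^{4}\ \text{tetrahedra/s},\\
      \text{overhead-free rate}  &\approx \frac{9\times 10^{6}}{0.74}
                                  \approx 1.2\times 10^{7}\ \text{tetrahedra/s}.
    \end{align*}
    ```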

  1. Food waste impact on municipal solid waste angle of internal friction.

    PubMed

    Cho, Young Min; Ko, Jae Hac; Chi, Liqun; Townsend, Timothy G

    2011-01-01

    The impact of food waste content on the municipal solid waste (MSW) friction angle was studied. Using reconstituted fresh MSW specimens with different food waste content (0%, 40%, 58%, and 80%), 48 small-scale (100-mm-diameter) direct shear tests and 12 large-scale (430 mm × 430 mm) direct shear tests were performed. A stress-controlled large-scale direct shear test device allowing approximately 170-mm sample horizontal displacement was designed and used. At both testing scales, the mobilized internal friction angle of MSW decreased considerably as food waste content increased. As food waste content increased from 0% to 40% and from 40% to 80%, the mobilized internal friction angles (estimated using the mobilized peak (ultimate) shear strengths of the small-scale direct shear tests) decreased from 39° to 31° and from 31° to 7°, respectively, while those of large-scale tests decreased from 36° to 26° and from 26° to 15°, respectively. Most friction angle measurements produced in this study fell within the range of those previously reported for MSW. Copyright © 2010 Elsevier Ltd. All rights reserved.
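
    For reference, the mobilized internal friction angle quoted above is conventionally backed out of a direct shear test (neglecting cohesion) from the mobilized shear stress and the applied normal stress; the abstract does not spell out the reduction procedure, so the relation below is the standard textbook form rather than the authors' exact method:

    ```latex
    \[
      \varphi_{\mathrm{mob}} \;=\; \arctan\!\left(\frac{\tau_{\mathrm{mob}}}{\sigma_{n}}\right)
    \]
    ```

    Here τ_mob is the mobilized peak (or ultimate) shear stress on the shear plane and σ_n is the normal stress applied during the test.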

  2. MAINTAINING DATA QUALITY IN THE PERFORMANCE OF A LARGE SCALE INTEGRATED MONITORING EFFORT

    EPA Science Inventory

    Macauley, John M. and Linda C. Harwell. In press. Maintaining Data Quality in the Performance of a Large Scale Integrated Monitoring Effort (Abstract). To be presented at EMAP Symposium 2004: Integrated Monitoring and Assessment for Effective Water Quality Management, 3-7 May 200...

  3. Large voltage modulation in superconducting quantum interference devices with submicron-scale step-edge junctions

    NASA Astrophysics Data System (ADS)

    Lam, Simon K. H.

    2017-09-01

    A promising direction for improving the sensitivity of a SQUID is to increase its junction's normal resistance value, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step towards developing highly sensitive single-layer SQUIDs, submicron-scale YBCO grain-boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and measurements of the transport properties were performed from 4.3 to 77 K. The FIB-induced deposition layer proved effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value for the same type of junction at micron scale. The submicron junction Rn values are in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high-field-sensitivity SQUID applications at moderately low temperatures, e.g., at 40-60 K.

  4. Performance of the Italian version of the subjective well-being under neuroleptic (SWN) scale in schizophrenic outpatients.

    PubMed

    Balestrieri, M; Giaroli, G; Mazzi, M; Bellantuono, C

    2006-05-01

    Several studies indicate that the subjective experience of antipsychotic drugs (APs) in schizophrenic patients is a key factor in ensuring a smooth recovery from the illness. The principal aim of this study was to establish the psychometric performance of the Italian version of the Subjective Well-being Under Neuroleptic (SWN) scale and to assess, through the SWN scale, the subjective experience of stabilized psychotic outpatients on maintenance treatment with APs. The original short version of the SWN, consisting of 20 items, was back-translated, and a focus group was also conducted to improve the comprehension of the scale. The results showed a good performance of the Italian version of the SWN, as documented by its internal consistency (Cronbach's alpha = 0.85). A satisfactory subjective experience was reported in the sample of schizophrenic outpatients interviewed (SWN mean total score: 84.95, SD: 17.5). The performance of the SWN scale in the present study was very similar to that reported by Naber et al. in the original validation study. Large multi-center studies are needed to better establish differences in the subjective experience of schizophrenic patients treated with first- and second-generation APs.

  5. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: subgrid-scale turbulence, combustion, soot and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5 and Smagorinsky constants ranging from 0.18 to 0.23 were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
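
    For reference, the Smagorinsky constant and the turbulent Prandtl/Schmidt numbers varied above enter the standard Smagorinsky closure as follows; this is the textbook form of the model, and the in-house code's exact formulation may differ:

    ```latex
    \begin{align*}
      \nu_{\mathrm{sgs}} &= (C_s \Delta)^2\, |\bar{S}|, \qquad
      |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},\\
      \alpha_t &= \frac{\nu_{\mathrm{sgs}}}{Pr_t}, \qquad
      D_t = \frac{\nu_{\mathrm{sgs}}}{Sc_t},
    \end{align*}
    ```

    where Δ is the filter width, ν_sgs the subgrid-scale eddy viscosity, and α_t and D_t the turbulent diffusivities used for the filtered energy and scalar (species) transport equations.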

  6. Molecular diagnosis of Plasmodium ovale by photo-induced electron transfer fluorogenic primers: PET-PCR

    PubMed Central

    Akerele, David; Ljolje, Dragan; Talundzic, Eldin; Udhayakumar, Venkatachalam

    2017-01-01

    Accurate diagnosis of malaria infections continues to be challenging and elusive, especially in the detection of submicroscopic infections. Developing new malaria diagnostic tools that are sensitive enough to detect low-level infections, user friendly, cost effective and capable of performing large-scale diagnosis remains critical. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. ovale by real-time PCR. In our study, a total of 173 clinical samples, consisting of different malaria species, were utilized to test this novel PET-PCR primer. The sensitivity and specificity were calculated using nested PCR as the reference test. The novel primer set demonstrated a sensitivity of 97.5% and a specificity of 99.2% (95% CI 85.2–99.8% and 95.2–99.9%, respectively). Furthermore, the limit of detection for P. ovale was found to be 1 parasite/μl. The PET-PCR assay is a new molecular diagnostic tool with performance comparable to other commonly used PCR methods. It is relatively easy to perform and amenable to large-scale malaria surveillance studies and malaria control and elimination programs. Further field validation of this novel primer will be helpful to ascertain its utility for large-scale malaria screening programs. PMID:28640824

  7. Molecular diagnosis of Plasmodium ovale by photo-induced electron transfer fluorogenic primers: PET-PCR.

    PubMed

    Akerele, David; Ljolje, Dragan; Talundzic, Eldin; Udhayakumar, Venkatachalam; Lucchi, Naomi W

    2017-01-01

    Accurate diagnosis of malaria infections continues to be challenging and elusive, especially in the detection of submicroscopic infections. Developing new malaria diagnostic tools that are sensitive enough to detect low-level infections, user friendly, cost effective and capable of performing large-scale diagnosis remains critical. We have designed novel self-quenching photo-induced electron transfer (PET) fluorogenic primers for the detection of P. ovale by real-time PCR. In our study, a total of 173 clinical samples, consisting of different malaria species, were utilized to test this novel PET-PCR primer. The sensitivity and specificity were calculated using nested PCR as the reference test. The novel primer set demonstrated a sensitivity of 97.5% and a specificity of 99.2% (95% CI 85.2-99.8% and 95.2-99.9%, respectively). Furthermore, the limit of detection for P. ovale was found to be 1 parasite/μl. The PET-PCR assay is a new molecular diagnostic tool with performance comparable to other commonly used PCR methods. It is relatively easy to perform and amenable to large-scale malaria surveillance studies and malaria control and elimination programs. Further field validation of this novel primer will be helpful to ascertain its utility for large-scale malaria screening programs.
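
    The sensitivity and specificity quoted above follow the usual 2x2 definitions against the reference test (nested PCR). The sketch below uses clearly hypothetical counts, not the study's 173 samples, purely to show the calculation.

    ```python
    # Minimal sketch: sensitivity and specificity of a new assay against a
    # reference test, from a 2x2 confusion matrix. Hypothetical counts only.
    def diagnostic_performance(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)   # detected among reference-positives
        specificity = tn / (tn + fp)   # negative among reference-negatives
        return sensitivity, specificity

    tp, fn, tn, fp = 45, 2, 120, 3     # hypothetical placeholder counts
    sens, spec = diagnostic_performance(tp, fn, tn, fp)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
    ```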

  8. Experiment-scale molecular simulation study of liquid crystal thin films

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael

    2014-03-01

    Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.

  9. The efficacy of cognitive prosthetic technology for people with memory impairments: a systematic review and meta-analysis.

    PubMed

    Jamieson, Matthew; Cullen, Breda; McGee-Lennon, Marilyn; Brewster, Stephen; Evans, Jonathan J

    2014-01-01

    Technology can compensate for memory impairment. The efficacy of assistive technology for people with memory difficulties and the methodology of selected studies are assessed. A systematic search was performed and all studies that investigated the impact of technology on memory performance for adults with impaired memory resulting from acquired brain injury (ABI) or a degenerative disease were included. Two 10-point scales were used to compare each study to an ideally reported single case experimental design (SCED) study (SCED scale; Tate et al., 2008) or randomised control group study (PEDro-P scale; Maher, Sherrington, Herbert, Moseley, & Elkins, 2003). Thirty-two SCED (mean = 5.9 on the SCED scale) and 11 group studies (mean = 4.45 on the PEDro-P scale) were found. Baseline and intervention performance for each participant in the SCED studies was re-calculated using non-overlap of all pairs (Parker & Vannest, 2009), giving a mean score of 0.85 on a 0 to 1 scale (17 studies, n = 36). A meta-analysis of the efficacy of technology vs. control in seven group studies gave a large effect size (d = 1.27) (n = 147). It was concluded that prosthetic technology can improve performance on everyday tasks requiring memory. There is a specific need for investigations of technology for people with degenerative diseases.
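    The non-overlap of all pairs (NAP) statistic used to re-score the single-case studies compares every baseline observation with every intervention observation. A minimal sketch, assuming higher scores indicate better memory performance; the data are hypothetical.

```python
def non_overlap_all_pairs(baseline, intervention):
    """NAP: share of (baseline, intervention) pairs in which the intervention
    score exceeds the baseline score; ties count as half an overlap."""
    pairs = [(b, t) for b in baseline for t in intervention]
    wins = sum(1.0 for b, t in pairs if t > b)
    ties = sum(0.5 for b, t in pairs if t == b)
    return (wins + ties) / len(pairs)

# Hypothetical single-case data (e.g. task successes out of 10 trials per session):
print(non_overlap_all_pairs(baseline=[2, 3, 3, 4], intervention=[6, 7, 7, 9]))
```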

  10. Automated Essay Scoring versus Human Scoring: A Correlational Study

    ERIC Educational Resources Information Center

    Wang, Jinhao; Brown, Michelle Stallone

    2008-01-01

    The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…

  11. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  12. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    NASA Astrophysics Data System (ADS)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D ~ O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.

  13. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    NASA Astrophysics Data System (ADS)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

    This paper studies the relationship between the photovoltaic power generation of a large-scale “fishing and PV complementary” grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period of a whole year. The results indicate that PV power generation has the most significant correlation with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across the months, the maximum monthly average power generation appears in August, which is related to the higher global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. A comparison of the monthly average performance ratio (PR) with the monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer when temperatures are higher. The results indicate that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and that it is important to adopt effective measures to decrease the temperature of the PV plant appropriately.
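    The performance ratio discussed above is conventionally the final system yield divided by the reference yield from in-plane irradiation. A minimal sketch of a monthly PR calculation under that convention; all numbers are illustrative, not the plant's data.

```python
def performance_ratio(energy_kwh, capacity_kwp, irradiation_kwh_m2, g_stc=1.0):
    """PR = (E / P_rated) / (H_poa / G_STC).

    energy_kwh         : AC energy delivered over the period [kWh]
    capacity_kwp       : installed capacity at standard test conditions [kWp]
    irradiation_kwh_m2 : in-plane irradiation over the period [kWh/m^2]
    g_stc              : reference irradiance, 1 kW/m^2 at STC
    """
    final_yield = energy_kwh / capacity_kwp        # kWh per kWp
    reference_yield = irradiation_kwh_m2 / g_stc   # equivalent full-sun hours
    return final_yield / reference_yield

# Illustrative month for a 100 MWp plant:
print(performance_ratio(energy_kwh=1.2e7, capacity_kwp=1.0e5, irradiation_kwh_m2=150.0))
```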

  14. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with that of other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining one-third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed performance equal to that of other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
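    A minimal sketch of how the three-item PASS score and its published ≥2 cut point might be applied; the item names mirror the abstract, but the binarization of each NIHSS item into abnormal/normal is an assumption made here for illustration.

```python
def pass_score(consciousness_impaired, gaze_palsy, arm_weakness):
    """PASS = number of abnormal items among the three listed in the abstract."""
    return sum([bool(consciousness_impaired), bool(gaze_palsy), bool(arm_weakness)])

def suspect_elvo(score, cut_point=2):
    """Flag suspected emergent large vessel occlusion at the published cut point."""
    return score >= cut_point

s = pass_score(consciousness_impaired=True, gaze_palsy=True, arm_weakness=False)
print(s, suspect_elvo(s))   # 2 True
```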

  15. On the relationship between large-scale climate modes and regional synoptic patterns that drive Victorian rainfall

    NASA Astrophysics Data System (ADS)

    Verdon-Kidd, D.; Kiem, A. S.

    2008-10-01

    In this paper regional (synoptic) and large-scale climate drivers of rainfall are investigated for Victoria, Australia. A non-linear classification methodology known as self-organizing maps (SOM) is used to identify 20 key regional synoptic patterns, which are shown to capture a range of significant synoptic features known to influence the climate of the region. Rainfall distributions are assigned to each of the 20 patterns for nine rainfall stations located across Victoria, resulting in a clear distinction between wet and dry synoptic types at each station. The influence of large-scale climate modes on the frequency and timing of the regional synoptic patterns is also investigated. This analysis revealed that phase changes in the El Niño Southern Oscillation (ENSO), the Southern Annular Mode (SAM) and/or Indian Ocean Dipole (IOD) are associated with a shift in the relative frequency of wet and dry synoptic types. Importantly, these results highlight the potential to utilise the link between the regional synoptic patterns derived in this study and large-scale climate modes to improve rainfall forecasting for Victoria, both in the short- (i.e. seasonal) and long-term (i.e. decadal/multi-decadal scale). In addition, the regional and large-scale climate drivers identified in this study provide a benchmark by which the performance of Global Climate Models (GCMs) may be assessed.

  16. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  17. Large Scale Cross Drive Correlation Of Digital Media

    DTIC Science & Technology

    2016-03-01

    Naval Postgraduate School, Monterey, California. Thesis: Large Scale Cross-Drive Correlation of Digital Media, by Joseph Van Bruaene, March 2016. …the ability to make large-scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a…

  18. Sex Differences in Arithmetical Performance Scores: Central Tendency and Variability

    ERIC Educational Resources Information Center

    Martens, R.; Hurks, P. P. M.; Meijs, C.; Wassenberg, R.; Jolles, J.

    2011-01-01

    The present study aimed to analyze sex differences in arithmetical performance in a large-scale sample of 390 children (193 boys) frequenting grades 1-9. Past research in this field has focused primarily on average performance, implicitly assuming homogeneity of variance, for which support is scarce. This article examined sex differences in…

  19. Rater Severity in Large-Scale Assessment: Is It Invariant?

    ERIC Educational Resources Information Center

    McQueen, Joy; Congdon, Peter J.

    A study was conducted to investigate the stability of rater severity over an extended rating period. Multifaceted Rasch analysis was applied to ratings of writing performances of 8,285 primary school (elementary) students. Each performance was rated on two performance dimensions by two trained raters over a period of 7 rating days. Performances…

  20. Guided growth of large-scale, horizontally aligned arrays of single-walled carbon nanotubes and their use in thin-film transistors.

    PubMed

    Kocabas, Coskun; Hur, Seung-Hyun; Gaur, Anshu; Meitl, Matthew A; Shim, Moonsub; Rogers, John A

    2005-11-01

    A convenient process for generating large-scale, horizontally aligned arrays of pristine, single-walled carbon nanotubes (SWNTs) is described. The approach uses guided growth, by chemical vapor deposition (CVD), of SWNTs on miscut single-crystal quartz substrates. Studies of the growth reveal important relationships between the density and alignment of the tubes, the CVD conditions, and the morphology of the quartz. Electrodes and dielectrics patterned on top of these arrays yield thin-film transistors that use the SWNTs as effective thin-film semiconductors. The ability to build high-performance devices of this type suggests significant promise for large-scale aligned arrays of SWNTs in electronics, sensors, and other applications.

  1. Perceived Stress, Energy Drink Consumption, and Academic Performance among College Students

    ERIC Educational Resources Information Center

    Pettit, Michele L.; DeBarr, Kathy A.

    2011-01-01

    Objective: This study explored relationships regarding perceived stress, energy drink consumption, and academic performance among college students. Participants: Participants included 136 undergraduates attending a large southern plains university. Methods: Participants completed surveys including items from the Perceived Stress Scale and items to…

  2. The Role of Scheduling in Observing Teacher-Child Interactions

    ERIC Educational Resources Information Center

    Cash, Anne H.; Pianta, Robert C.

    2014-01-01

    Observational assessment is being used on a large scale to evaluate the quality of interactions between teachers and children in classroom environments. When one performs observations at scale, features of the protocol such as the scheduling of observations can potentially influence observed scores. In this study interactions were observed for 88…

  3. The Immigrant's University: A Study of Academic Performance and the Experiences of Recent Immigrant Groups at the University of California

    ERIC Educational Resources Information Center

    Douglass, John Aubrey; Thomson, Gregg

    2010-01-01

    One of the major characteristics of globalization is the large influx of immigrant groups moving largely from underdeveloped regions to developed economies. California offers one of the most robust examples of a large-scale, postmodern demographic transition that includes a great racial, ethnic, and cultural diversity of immigrant groups, many of…

  4. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
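    As a rough illustration of approximating a kernel matrix with a small set of prototype vectors, here is a Nystrom-style low-rank sketch. This is not the authors' PVM algorithm; in particular, the random choice of prototypes stands in for their selection criteria.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets a (n x d) and b (m x d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 5))                            # n data points
prototypes = x[rng.choice(len(x), 50, replace=False)]     # m << n prototypes

k_nm = rbf_kernel(x, prototypes)                          # n x m
k_mm = rbf_kernel(prototypes, prototypes)                 # m x m
k_approx = k_nm @ np.linalg.pinv(k_mm) @ k_nm.T           # low-rank surrogate for the n x n kernel

print(k_approx.shape)   # (1000, 1000), built from only 50 prototype columns
```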

  5. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.

    DOT National Transportation Integrated Search

    2012-06-01

    This research program conducted a large experimental program, which consisted of the design, construction, curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice region, and then compared ...

  6. Optical correlator using very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators

    NASA Technical Reports Server (NTRS)

    Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.

    1993-01-01

    The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale-integrated / ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.

  7. Cross-axis synchronous flow-through coil planet centrifuge for large-scale preparative counter-current chromatography. III. Performance of large-bore coils in slow planetary motion.

    PubMed

    Ito, Y; Zhang, T Y

    1988-11-25

    The preparative capability of the present cross-axis synchronous flow-through coil planet centrifuge was demonstrated with 0.5 cm I.D. multilayer coils. Results of the model studies with short coils indicated that optimal separations are obtained at low revolutional speeds of 100-200 rpm in both the central and lateral coil positions. Preparative separations were successfully performed on 2.5-10 g quantities of test samples in a pair of multilayer coils connected in series with a total capacity of 2.5 l. The sample loading capacity can be scaled up severalfold by increasing the column width.

  8. Limited English Proficiency, Race/Ethnicity and Socio-Economic Status as Influences on Scores in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Terwilliger, James S.; Magnuson, Paul

    2005-01-01

    The purpose of this study was to examine the effects of three basic demographic variables on reading test scores for students in the middle elementary grades. Limited English proficiency (LEP), race/ethnicity and socio-economic status (SES) were studied to determine their influence individually and in combination on performance in large-scale…

  9. The use of data from national and other large-scale user experience surveys in local quality work: a systematic review.

    PubMed

    Haugum, Mona; Danielsen, Kirsten; Iversen, Hilde Hestad; Bjertnaes, Oyvind

    2014-12-01

    An important goal of national and large-scale surveys of user experiences is quality improvement. However, large-scale surveys are normally conducted by a professional external surveyor, creating an institutionalized division between the measurement of user experiences and the quality work that is performed locally. The aim of this study was to identify and describe scientific studies related to the use of national and large-scale surveys of user experiences in local quality work. Data sources were Ovid EMBASE, Ovid MEDLINE, Ovid PsycINFO and the Cochrane Database of Systematic Reviews. Scientific publications about user experiences and satisfaction were selected that addressed the extent to which data from national and other large-scale user experience surveys are used for local quality work in the health services. Themes of interest were identified and a narrative analysis was undertaken. Thirteen publications were included, all of which differed substantially in several characteristics. The results show that large-scale surveys of user experiences are used in local quality work. The types of follow-up activity varied considerably, from conducting a follow-up analysis of user experience survey data to information sharing and more systematic efforts to use the data as a basis for improving the quality of care. This review shows that large-scale surveys of user experiences are used in local quality work. However, there is a need for more, better and standardized research in this field. The considerable variation in follow-up activities points to the need for systematic guidance on how to use data in local quality work. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  10. Vibration-based structural health monitoring of the aircraft large component

    NASA Astrophysics Data System (ADS)

    Pavelko, V.; Kuznetsov, S.; Nevsky, A.; Marinbah, M.

    2017-10-01

    This paper investigates the basic problems of a local structural health monitoring (SHM) system for a large-scale aircraft component. Vibration-based damage detection is accepted as the basic approach, and the main attention is focused on a low-cost solution that would be attractive for practice. The conditions for small damage detection in a full-scale structural component under low-frequency excitation were defined in an analytical study and by modal FEA. In the experimental study, a dynamic test of the helicopter Mi-8 tail beam was performed under harmonic excitation with a frequency close to the first natural frequency of the beam. The correlation coefficient deviation (CCD) index was used to extract features due to embedded pseudo-damage. It is shown that the problem of vibration-based detection of small damage in a large-scale structure under low-frequency excitation can be solved successfully.
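    A minimal sketch of a correlation-coefficient-deviation style damage index. It assumes, as an interpretation of the abstract rather than its exact definition, that CCD is the drop of the correlation coefficient between baseline and current vibration responses from unity; the signals below are synthetic.

```python
import numpy as np

def ccd_index(baseline_signal, current_signal):
    """Correlation coefficient deviation: 1 - r(baseline, current).
    Values near 0 suggest an unchanged structure; larger values suggest damage."""
    r = np.corrcoef(baseline_signal, current_signal)[0, 1]
    return 1.0 - r

# Synthetic harmonic responses; the 'damaged' one has a small amplitude/phase change:
t = np.linspace(0.0, 1.0, 2000)
baseline = np.sin(2 * np.pi * 12.0 * t)
damaged = 0.97 * np.sin(2 * np.pi * 12.0 * t + 0.05)
print(ccd_index(baseline, damaged))
```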

  11. No Country Left Behind: Rhetoric and Reality of International Large-Scale Assessment. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Feuer, Michael J.

    2011-01-01

    Few arguments about education are as effective at galvanizing public attention and motivating political action as those that compare the performance of students with their counterparts in other countries and that connect academic achievement to economic performance. Because data from international large-scale assessments (ILSA) have a powerful…

  12. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.

  13. Comparison of fall prediction by the Hessisch Oldendorf Fall Risk Scale and the Fall Risk Scale by Huhn in neurological rehabilitation: an observational study.

    PubMed

    Hermann, Olena; Schmidt, Simone B; Boltzmann, Melanie; Rollnik, Jens D

    2018-05-01

    To calculate the scale performance of the newly developed Hessisch Oldendorf Fall Risk Scale (HOSS) for classifying fallers and non-fallers in comparison with the Risk of Falling Scale by Huhn (FSH), a frequently used assessment tool. A prospective observational trial was conducted. The study was performed in a large specialized neurological rehabilitation facility. The study population (n = 690) included neurological and neurosurgery patients undergoing neurological rehabilitation with varying levels of disability. Roughly half of the study patients were independent in the activities of daily living (ADL) and half were dependent. The fall risk of each patient was assessed by HOSS and FSH within the first seven days after admission. Falls during rehabilitation were compared with HOSS and FSH scores as well as the corresponding fall risk. Scale performance, including sensitivity and specificity, was calculated for both scales. A total of 107 (15.5%) patients experienced at least one fall. In general, fallers were characterized by an older age, a prolonged length of stay, and a lower Barthel Index (higher dependence in the ADL) on admission than non-fallers. The verification of fall prediction for both scales showed a sensitivity of 83% and a specificity of 64% for the HOSS scale, and a sensitivity of 98% with a specificity of 12% for the FSH scale. The HOSS shows adequate sensitivity, higher specificity and therefore better scale performance than the FSH. Thus, the HOSS might be superior to existing assessments.

  14. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    PubMed

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
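    A minimal PyTorch sketch of a fully convolutional 3D network built from small 3x3x3 kernels, in the spirit of the approach described; the channel counts, depth, and class count are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    """Fully convolutional 3D segmenter: small kernels, no fully connected layers."""
    def __init__(self, in_channels=1, num_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv3d(32, num_classes, kernel_size=1)  # per-voxel class scores

    def forward(self, x):
        return self.classifier(self.features(x))

# One 64^3 MRI sub-volume, batch size 1:
logits = Tiny3DFCN()(torch.zeros(1, 1, 64, 64, 64))
print(logits.shape)   # torch.Size([1, 15, 64, 64, 64])
```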

  15. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  16. Novel Miscanthus Germplasm-Based Value Chains: A Life Cycle Assessment

    PubMed Central

    Wagner, Moritz; Kiesel, Andreas; Hastings, Astley; Iqbal, Yasir; Lewandowski, Iris

    2017-01-01

    In recent years, considerable progress has been made in miscanthus research: improvement of management practices, breeding of new genotypes, especially for marginal conditions, and development of novel utilization options. The purpose of the current study was a holistic analysis of the environmental performance of such novel miscanthus-based value chains. In addition, the relevance of the analyzed environmental impact categories was assessed. A Life Cycle Assessment was conducted to analyse the environmental performance of the miscanthus-based value chains in 18 impact categories. In order to include the substitution of a reference product, a system expansion approach was used. In addition, a normalization step was applied. This allowed the relevance of these impact categories to be evaluated for each utilization pathway. The miscanthus was cultivated on six sites in Europe (Aberystwyth, Adana, Moscow, Potash, Stuttgart and Wageningen) and the biomass was utilized in the following six pathways: (1) small-scale combustion (heat)—chips; (2) small-scale combustion (heat)—pellets; (3) large-scale combustion (CHP)—biomass baled for transport and storage; (4) large-scale combustion (CHP)—pellets; (5) medium-scale biogas plant—ensiled miscanthus biomass; and (6) large-scale production of insulation material. Thus, in total, the environmental performance of 36 site × pathway combinations was assessed. The comparatively high normalized results of human toxicity, marine, and freshwater ecotoxicity, and freshwater eutrophication indicate the relevance of these impact categories in the assessment of miscanthus-based value chains. Differences between the six sites can almost entirely be attributed to variations in biomass yield. However, the environmental performance of the utilization pathways analyzed varied widely. The largest differences were shown for freshwater and marine ecotoxicity, and freshwater eutrophication. The production of insulation material had the lowest impact on the environment, with net benefits in all impact categories except three (marine eutrophication, human toxicity, agricultural land occupation). This performance can be explained by the multiple use of the biomass, first as material and subsequently as an energy carrier, and by the substitution of an emission-intensive reference product. The results of this study emphasize the importance of assessing all environmental impacts when selecting appropriate utilization pathways. PMID:28642784

  17. Review of the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. G. Little

    1999-03-01

    The Idaho National Engineering and Environmental Laboratory (INEEL), through the US Department of Energy (DOE), has proposed that a large-scale wind test facility (LSWTF) be constructed to study, at full scale, the behavior of low-rise structures under simulated extreme wind conditions. To determine the need for, and potential benefits of, such a facility, the Idaho Operations Office of the DOE requested that the National Research Council (NRC) perform an independent assessment of the role and potential value of an LSWTF in the overall context of wind engineering research. The NRC established the Committee to Review the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures, under the auspices of the Board on Infrastructure and the Constructed Environment, to perform this assessment. This report conveys the results of the committee's deliberations as well as its findings and recommendations. Data developed at large scale would enhance the understanding of how structures, particularly light-frame structures, are affected by extreme winds (e.g., hurricanes, tornadoes, severe thunderstorms, and other events). With a large-scale wind test facility, full-sized structures, such as site-built or manufactured housing and small commercial or industrial buildings, could be tested under a range of wind conditions in a controlled, repeatable environment. At this time, the US has no facility specifically constructed for this purpose. During the course of this study, the committee was confronted by three difficult questions: (1) does the lack of a facility equate to a need for the facility? (2) is need alone sufficient justification for the construction of a facility? and (3) would the benefits derived from information produced in an LSWTF justify the costs of producing that information? The committee's evaluation of the need and justification for an LSWTF was shaped by these realities.

  18. Keeping on Track: Performance Profiles of Low Performers in Academic Educational Tracks

    ERIC Educational Resources Information Center

    Reed, Helen C.; van Wesel, Floryt; Ouwehand, Carolijn; Jolles, Jelle

    2015-01-01

    In countries with high differentiation between academic and vocational education, an individual's future prospects are strongly determined by the educational track to which he or she is assigned. This large-scale, cross-sectional study focuses on low-performing students in academic tracks who face being moved to a vocational track. If more is…

  19. Experimental and analytical investigations to improve low-speed performance and stability and control characteristics of supersonic cruise fighter vehicles

    NASA Technical Reports Server (NTRS)

    Graham, A. B.

    1977-01-01

    Small- and large-scale models of supersonic cruise fighter vehicles were used to determine the effectiveness of airframe/propulsion integration concepts for improved low-speed performance and stability and control characteristics. Computer programs were used for engine/airframe sizing studies to yield optimum vehicle performance.

  20. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF-extension phase.

    DOT National Transportation Integrated Search

    2015-03-01

    A large experimental program, consisting of the design, construction, curing, exposure, and structural load testing of 16 large-scale column specimens with a critical lap splice region that were influenced by varying stages of alkali-silica react...

  1. Scaling earthquake ground motions for performance-based assessment of buildings

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.

    2011-01-01

    The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
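    A minimal sketch of the first scaling method listed, geometric-mean scaling of a ground-motion pair: a single amplitude factor is applied to both horizontal components so that the geometric mean of their spectral accelerations at a chosen period matches the target. The numbers below are illustrative only.

```python
import math

def geometric_mean_scale_factor(sa_comp1, sa_comp2, sa_target):
    """Factor applied to both horizontal components so that
    sqrt(sa1 * sa2) * factor equals the target spectral acceleration."""
    return sa_target / math.sqrt(sa_comp1 * sa_comp2)

# Illustrative spectral accelerations (in g) at the period of interest:
f = geometric_mean_scale_factor(sa_comp1=0.35, sa_comp2=0.50, sa_target=0.60)
print(f)   # both components of the record pair are multiplied by this factor
```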

  2. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
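    A minimal sketch of the formation-energy bookkeeping behind such a screening: the computed total energy of the compound minus the energies of its elemental references, per atom. All energies and the composition below are placeholders invented for illustration, not results from the study.

```python
def formation_energy_per_atom(e_compound, composition, elemental_energies):
    """E_f = (E_compound - sum_i n_i * E_i^elemental) / N_atoms.
    Negative values indicate stability against decomposition into the elements."""
    n_atoms = sum(composition.values())
    e_refs = sum(n * elemental_energies[el] for el, n in composition.items())
    return (e_compound - e_refs) / n_atoms

# Hypothetical Li-Mg-Al cell (energies in eV, invented for illustration):
print(formation_energy_per_atom(
    e_compound=-44.4,
    composition={"Li": 4, "Mg": 4, "Al": 8},
    elemental_energies={"Li": -1.90, "Mg": -1.50, "Al": -3.75},
))
```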

  3. Experimental study of detonation of large-scale powder-droplet-vapor mixtures

    NASA Astrophysics Data System (ADS)

    Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.

    2018-05-01

    Large-scale experiments were carried out to investigate the detonation performance of a 1600-m3 ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernably increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.

  4. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires describing processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.

  5. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing

    NASA Astrophysics Data System (ADS)

    Colombo, Matteo

    2017-03-01

    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.

  6. Large Scale GW Calculations on the Cori System

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  7. Global-scale regionalization of hydrological model parameters using streamflow data from many small catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard

    2015-04-01

    Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
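    A minimal sketch of the donor-to-grid-cell transfer idea in scheme EXP1: each ungauged cell receives the calibrated parameter set of the most similar donor catchment, with similarity computed from normalized climatic and physiographic attributes. The attribute choices and the HBV-style parameter names below are assumptions for illustration, not the study's exact similarity measure.

```python
import numpy as np

def most_similar_donor(cell_attrs, donor_attrs):
    """Return the index of the donor catchment closest to the grid cell in
    normalized attribute space (smaller Euclidean distance = more similar)."""
    donors = np.asarray(donor_attrs, dtype=float)
    cell = np.asarray(cell_attrs, dtype=float)
    mean, std = donors.mean(axis=0), donors.std(axis=0) + 1e-12
    d = np.linalg.norm((donors - mean) / std - (cell - mean) / std, axis=1)
    return int(np.argmin(d))

# Illustrative attributes: [mean annual precipitation (mm), aridity index, mean slope (deg)]
donors = [[1200, 0.8, 5.0], [450, 2.1, 1.5], [900, 1.0, 12.0]]
donor_params = [{"FC": 250, "BETA": 2.0}, {"FC": 120, "BETA": 4.5}, {"FC": 300, "BETA": 1.5}]
idx = most_similar_donor([500, 1.9, 2.0], donors)
print(donor_params[idx])   # calibrated parameter set transferred to the ungauged cell
```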

  8. Heavy hydrocarbon main injector technology

    NASA Technical Reports Server (NTRS)

    Fisher, S. C.; Arbit, H. A.

    1988-01-01

    One of the key components of the Advanced Launch System (ALS) is a large liquid-rocket booster engine. To keep the overall vehicle size and cost down, this engine will probably use liquid oxygen (LOX) and a heavy hydrocarbon, such as RP-1, as propellants and operate at relatively high chamber pressures to increase overall performance. A technology program (Heavy Hydrocarbon Main Injector Technology) is being conducted. The main objective of this effort is to develop a logic plan and supporting experimental data base to reduce the risk of developing a large scale (approximately 750,000 lb thrust), high performance main injector system. The overall approach and program plan, from initial analyses to large scale, two dimensional combustor design and test, and the current status of the program are discussed. Progress includes performance and stability analyses, cold flow tests of injector models, design and fabrication of subscale injectors and calorimeter combustors for performance, heat transfer, and dynamic stability tests, and preparation of hot fire test plans. Related, current, high pressure, LOX/RP-1 injector technology efforts are also briefly discussed.

  9. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations to mark up and annotate them. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed setting. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.

  10. Botswana water and surface energy balance research program. Part 2: Large scale moisture and passive microwaves

    NASA Technical Reports Server (NTRS)

    Vandegriend, A. A.; Owe, M.; Chang, A. T. C.

    1992-01-01

    The Botswana water and surface energy balance research program was developed to study and evaluate the integrated use of multispectral satellite remote sensing for monitoring the hydrological status of the Earth's surface. The research program consisted of two major, mutually related components: a surface energy balance modeling component, built around an extensive field campaign; and a passive microwave research component which consisted of a retrospective study of large scale moisture conditions and Nimbus scanning multichannel microwave radiometer microwave signatures. The integrated approach of both components are explained in general and activities performed within the passive microwave research component are summarized. The microwave theory is discussed taking into account: soil dielectric constant, emissivity, soil roughness effects, vegetation effects, optical depth, single scattering albedo, and wavelength effects. The study site is described. The soil moisture data and its processing are considered. The relation between observed large scale soil moisture and normalized brightness temperatures is discussed. Vegetation characteristics and inverse modeling of soil emissivity is considered.

  11. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude and position therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly have a single function, require station moving and have other shortcomings. The laser tracker needs to work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping and can hardly achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.

  12. Performance of the first Japanese large-scale facility for radon inhalation experiments with small animals.

    PubMed

    Ishimori, Yuu; Mitsunobu, Fumihiro; Yamaoka, Kiyonori; Tanaka, Hiroshi; Kataoka, Takahiro; Sakoda, Akihiro

    2011-07-01

    A radon test facility for small animals was developed in order to increase the statistical validity of differences in the biological response in various radon environments. This paper describes the performance of that facility, the first large-scale facility of its kind in Japan. The facility has the capability to conduct approximately 150 mouse-scale tests at the same time. The apparatus for exposing small animals to radon has six animal chamber groups with five independent cages each. Different radon concentrations in each animal chamber group are available. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions for controlling radon and eliminating thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were also examined experimentally, to be considered in future projects.

  13. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    PubMed

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system often comprises a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual analysis approach is effective in identifying trends and anomalies of the systems.
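    A minimal sketch of one plausible similarity measure between compute nodes' sampled profile series (z-scored Euclidean distance over the metrics); this is an illustration under assumed definitions, not the paper's actual measure or layout algorithm.

```python
import numpy as np

def node_similarity(series_a, series_b):
    """Similarity in (0, 1] between two nodes' multivariate profiles.
    Each series has shape (n_samples, n_metrics), e.g. CPU, memory, network."""
    a, b = np.asarray(series_a, float), np.asarray(series_b, float)
    z = lambda s: (s - s.mean(axis=0)) / (s.std(axis=0) + 1e-12)  # per-metric z-score
    dist = np.linalg.norm(z(a) - z(b)) / a.size
    return 1.0 / (1.0 + dist)

rng = np.random.default_rng(1)
node_a = rng.random((60, 3))                   # 60 samples of 3 metrics
node_b = node_a + 0.05 * rng.random((60, 3))   # a node behaving almost identically
print(node_similarity(node_a, node_b))
```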

  14. Translational bioinformatics in the cloud: an affordable alternative

    PubMed Central

    2010-01-01

    With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073

  15. A laser-sheet flow visualization technique for the large wind tunnels of the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Reinath, M. S.; Ross, J. C.

    1990-01-01

    A flow visualization technique for the large wind tunnels of the National Full Scale Aerodynamics Complex (NFAC) is described. The technique uses a laser sheet generated by the NFAC Long Range Laser Velocimeter (LRLV) to illuminate a smoke-like tracer in the flow. The LRLV optical system is modified slightly, and a scanned mirror is added to generate the sheet. These modifications are described, in addition to the results of an initial performance test conducted in the 80- by 120-Foot Wind Tunnel. During this test, flow visualization was performed in the wake region behind a truck as part of a vehicle drag reduction study. The problems encountered during the test are discussed, in addition to the recommended improvements needed to enhance the performance of the technique for future applications.

  16. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. On large scales, scattering complicates imaging through the atmosphere from airborne or space-borne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering, and of methods for imaging through scattering, across different scales and media, particularly with respect to the use of time of flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.

  17. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
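
    As a worked illustration of the HORRAT ratio named above (a sketch only; it assumes the predicted relative standard deviation comes from the Horwitz equation, PRSD(%) = 2*C**(-0.1505) with C the concentration expressed as a mass fraction):

        def horrat(observed_rsd_percent, concentration_mass_fraction):
            # Predicted among-laboratories RSD from the Horwitz equation (assumption).
            predicted_rsd_percent = 2.0 * concentration_mass_fraction ** -0.1505
            return observed_rsd_percent / predicted_rsd_percent

        # Example: among-laboratory RSD of 12% found for a test sample at 1 mg/kg (C = 1e-6).
        # HORRAT values near 1 indicate typical method performance.
        print(round(horrat(12.0, 1e-6), 2))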

  18. Allometry indicates giant eyes of giant squid are not exceptional.

    PubMed

    Schmitz, Lars; Motani, Ryosuke; Oufiero, Christopher E; Martin, Christopher H; McGee, Matthew D; Gamarra, Ashlee R; Lee, Johanna J; Wainwright, Peter C

    2013-02-18

    The eyes of giant and colossal squid are among the largest eyes in the history of life. It was recently proposed that sperm whale predation is the main driver of eye size evolution in giant squid, on the basis of an optical model that suggested optimal performance in detecting large luminous visual targets such as whales in the deep sea. However, it is poorly understood how the eye size of giant and colossal squid compares to that of other aquatic organisms when scaling effects are considered. We performed a large-scale comparative study that included 87 squid species and 237 species of acanthomorph fish. While squid have larger eyes than most acanthomorphs, a comparison of relative eye size among squid suggests that giant and colossal squid do not have unusually large eyes. After revising constants used in a previous model we found that large eyes perform equally well in detecting point targets and large luminous targets in the deep sea. The eyes of giant and colossal squid do not appear exceptionally large when allometric effects are considered. It is probable that the giant eyes of giant squid result from a phylogenetically conserved developmental pattern manifested in very large animals. Whatever the cause of large eyes, they appear to have several advantages for vision in the reduced light of the deep mesopelagic zone.

  19. Vapor and healing treatment for CH3NH3PbI3-xClx films toward large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Gouda, Laxman; Gottesman, Ronen; Tirosh, Shay; Haltzi, Eynav; Hu, Jiangang; Ginsburg, Adam; Keller, David A.; Bouhadana, Yaniv; Zaban, Arie

    2016-03-01

    Hybrid methyl-ammonium lead trihalide perovskites are promising low-cost materials for use in solar cells and other optoelectronic applications. With a certified photovoltaic conversion efficiency record of 20.1%, scale-up for commercial purposes is already underway. However, preparation of large-area perovskite films remains a challenge, and films of perovskites on large electrodes suffer from non-uniform performance. Thus, production and characterization of the lateral uniformity of large-area films is a crucial step towards scale-up of devices. In this paper, we present a reproducible method for improving the lateral uniformity and performance of large-area perovskite solar cells (32 cm2). The method is based on methyl-ammonium iodide (MAI) vapor treatment as a new step in the sequential deposition of perovskite films. Following the MAI vapor treatment, we used high throughput techniques to map the photovoltaic performance throughout the large-area device. The lateral uniformity and performance of all photovoltaic parameters (Voc, Jsc, Fill Factor, Photo-conversion efficiency) increased, with an overall improvement in photo-conversion efficiency of ~100% following a vapor treatment at 140 °C. Based on XRD and photoluminescence measurements, we propose that the MAI treatment promotes a "healing effect" in the perovskite film which increases the lateral uniformity across the large-area solar cell. Thus, the straightforward MAI vapor treatment is highly beneficial for large-scale commercialization of perovskite solar cells, regardless of the specific deposition method. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08658b

  20. Educational Objectives and Academic Performance in Higher Education. Volume 1. Prediction of Academic Performance. Volume 2. Aims and Objectives.

    ERIC Educational Resources Information Center

    Entwistle, Noel J.; And Others

    1977-01-01

    Volume 1 discusses a large-scale follow-up study of the correlates of academic success in 2,595 college students. Volume 2 presents a philosophical and historical approach to studying educational objectives. (Available in microfiche from: Carfax Publishing Company, Haddon House, Dorchester-on-Thames, Oxford 0X9 8JZ, England.) (CP)

  1. Computational Typologies of Multidimensional End-of-Primary-School Performance Profiles from an Educational Perspective of Large-Scale TIMSS and PIRLS Surveys

    ERIC Educational Resources Information Center

    Unlu, Ali; Schurig, Michael

    2015-01-01

    Recently, performance profiles in reading, mathematics and science were created using the data collectively available in the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS) 2011. In addition, a classification of children to the end of their primary school years was…

  2. Edge-localized mode avoidance and pedestal structure in I-mode plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walk, J. R., E-mail: jrwalk@psfc.mit.edu; Hughes, J. W.; Hubbard, A. E.

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines—pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence—both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  3. Edge-localized mode avoidance and pedestal structure in I-mode plasmasa)

    NASA Astrophysics Data System (ADS)

    Walk, J. R.; Hughes, J. W.; Hubbard, A. E.; Terry, J. L.; Whyte, D. G.; White, A. E.; Baek, S. G.; Reinke, M. L.; Theiler, C.; Churchill, R. M.; Rice, J. E.; Snyder, P. B.; Osborne, T.; Dominguez, A.; Cziegler, I.

    2014-05-01

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines—pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence—both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  4. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  5. Leading Educational Change and Improvement at Scale: Some Inconvenient Truths about System Performance

    ERIC Educational Resources Information Center

    Harris, Alma; Jones, Michelle

    2017-01-01

    The challenges of securing educational change and transformation, at scale, remain considerable. While sustained progress has been made in some education systems (Fullan, 2009; Hargreaves & Shirley, 2009) generally, it remains the case that the pathway to large-scale, system improvement is far from easy or straightforward. While large-scale…

  6. Using HLM to Explore the Effects of Perceptions of Learning Environments and Assessments on Students' Test Performance

    ERIC Educational Resources Information Center

    Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.

    2014-01-01

    The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…

  7. Reading Fluency as a Predictor of Reading Proficiency in Low-Performing, High-Poverty Schools

    ERIC Educational Resources Information Center

    Baker, Scott K.; Smolkowski, Keith; Katz, Rachell; Fien, Hank; Seeley, John R.; Kame'enui, Edward J.; Beck, Carrie Thomas

    2008-01-01

    The purpose of this study was to examine oral reading fluency (ORF) in the context of a large-scale federal reading initiative conducted in low performing, high poverty schools. The objectives were to (a) investigate the relation between ORF and comprehensive reading tests, (b) examine whether slope of performance over time on ORF predicted…

  8. Van der Waals epitaxial growth and optoelectronics of large-scale WSe2/SnS2 vertical bilayer p-n junctions.

    PubMed

    Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian

    2017-12-04

    High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe2/SnS2 vertical bilayer p-n junctions on SiO2/Si substrates, with the lateral sizes reaching up to millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10^-14 A and a highest on-off ratio of up to 10^7. Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all the directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.

  9. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  10. Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.

    PubMed

    Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael

    2013-08-01

    We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., at the national scale, and at microscopic scales, i.e., at the city level. It is shown that the thermodynamic limit assumption for the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even if the gravity model has a better overall performance the parameter-free radiation model gives competitive results, especially for large scales.
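
    A minimal sketch of the radiation model with a correction for finite systems of the kind described above (our reading only; the correction factor 1/(1 - m_i/M), the inputs, and all names are illustrative assumptions rather than the paper's exact formulation):

        import numpy as np

        def radiation_flows(pop, coords, out_trips, finite_correction=True):
            """Radiation-model commuting flows between point locations.
            pop: populations m_i; coords: (n, 2) positions; out_trips: total trips T_i leaving each origin."""
            n = len(pop)
            M = pop.sum()
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            T = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    # s_ij: population strictly inside the circle of radius d_ij around i,
                    # excluding the origin population itself (the destination lies on the circle).
                    inside = d[i] < d[i, j]
                    s = pop[inside].sum() - pop[i]
                    p = (pop[i] * pop[j]) / ((pop[i] + s) * (pop[i] + pop[j] + s))
                    if finite_correction:
                        p /= (1.0 - pop[i] / M)   # normalization factor for finite systems (assumed form)
                    T[i, j] = out_trips[i] * p
            return T

        # Toy usage with four locations
        pop = np.array([1e5, 5e4, 2e4, 8e4])
        coords = np.array([[0, 0], [10, 0], [0, 15], [20, 5]], dtype=float)
        print(np.round(radiation_flows(pop, coords, out_trips=0.3 * pop)))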

  11. Small-scale test program to develop a more efficient swivel nozzle thrust deflector for V/STOL lift/cruise engines

    NASA Technical Reports Server (NTRS)

    Schlundt, D. W.

    1976-01-01

    The installed performance degradation of a swivel nozzle thrust deflector system obtained during increased vectoring angles of a large-scale test program was investigated and improved. Small-scale models were used to generate performance data for analyzing selected swivel nozzle configurations. A single-swivel nozzle design model with five different nozzle configurations and a twin-swivel nozzle design model, scaled to 0.15 size of the large-scale test hardware, were statically tested at low exhaust pressure ratios of 1.4, 1.3, 1.2, and 1.1 and vectored at four nozzle positions from 0 deg cruise through 90 deg vertical used for the VTOL mode.

  12. Providing Test Performance Feedback That Bridges Assessment and Instruction: The Case of Two Standardized English Language Tests in Japan

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo; Koizumi, Rie

    2017-01-01

    This small-scale qualitative study considers feedback and results reported for two major large-scale English language tests administered in Japan: the Global Test of English Communication for Students (GTECfS) and the Eiken Test in Practical English Proficiency (Eiken). Specifically, it examines current score-reporting practices in student and…

  13. Performance evaluation of the bioreactor landfill in treatment and stabilisation of mechanically biologically treated municipal solid waste.

    PubMed

    Lakshmikanthan, P; Sivakumar Babu, G L

    2017-03-01

    The potential of bioreactor landfills to treat mechanically biologically treated municipal solid waste is analysed in this study. Developing countries like India and China have begun to investigate bioreactor landfills for municipal solid waste management. This article describes the impacts of leachate recirculation on waste stabilisation, landfill gas generation, leachate characteristics and long-term waste settlement. Small-scale and large-scale anaerobic cells were filled with mechanically biologically treated municipal solid waste collected from a landfill site on the outskirts of Bangalore, India. Leachate collected from the same landfill site was recirculated at a rate of 2-5 times a month on a regular basis for 370 days. The total quantity of gas generated was around 416 L in the large-scale reactor and 21 L in the small-scale reactor, respectively. Differential settlements ranging from 20%-26% were observed at two different locations in the large reactor, whereas 30% settlement was observed in the small reactor. The biological oxygen demand/chemical oxygen demand (COD) ratio indicated that the waste in the large reactor was stabilised at the end of 1 year. The performance of the bioreactor with respect to reactor size, temperature, landfill gas and leachate quality was analysed, and it was found that the bioreactor landfill is efficient in treating and stabilising mechanically biologically treated municipal solid waste.

  14. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    A growing number of studies use gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation of significance level step and the multiple hypothesis testing step, its computational scalability and efficiency are poor on large-scale datasets. We proposed paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
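
    The enrichment-score step at the heart of GSEA-type tools can be sketched as follows (a plain, unoptimized illustration of the standard weighted running-sum statistic; it is not paraGSEA's optimized or parallel implementation, and the names and toy data are ours):

        import numpy as np

        def enrichment_score(ranked_genes, gene_set, scores, p=1.0):
            """Weighted KS-like running-sum enrichment score (GSEA-style).
            ranked_genes: genes sorted by correlation (descending); scores: matching values."""
            hits = np.array([g in gene_set for g in ranked_genes])
            weights = np.abs(scores) ** p
            hit_step = np.where(hits, weights, 0.0)
            hit_step = hit_step / hit_step.sum()          # increments at gene-set members
            miss_step = np.where(~hits, 1.0 / (~hits).sum(), 0.0)  # decrements elsewhere
            running = np.cumsum(hit_step - miss_step)
            return running[np.argmax(np.abs(running))]    # maximum deviation from zero

        # Toy usage
        genes = ["g%d" % i for i in range(10)]
        vals = np.linspace(2.0, -2.0, 10)
        print(round(enrichment_score(genes, {"g0", "g1", "g7"}, vals), 3))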

  15. Channel optimization of high-intensity laser beams in millimeter-scale plasmas.

    PubMed

    Ceurvorst, L; Savin, A; Ratan, N; Kasim, M F; Sadler, J; Norreys, P A; Habara, H; Tanaka, K A; Zhang, S; Wei, M S; Ivancic, S; Froula, D H; Theobald, W

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^18 W/cm^2) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  16. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    NASA Astrophysics Data System (ADS)

    Ceurvorst, L.; Savin, A.; Ratan, N.; Kasim, M. F.; Sadler, J.; Norreys, P. A.; Habara, H.; Tanaka, K. A.; Zhang, S.; Wei, M. S.; Ivancic, S.; Froula, D. H.; Theobald, W.

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^18 W/cm^2) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  17. SIGN SINGULARITY AND FLARES IN SOLAR ACTIVE REGION NOAA 11158

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorriso-Valvo, L.; De Vita, G.; Kazachenko, M. D.

    Solar Active Region NOAA 11158 has hosted a number of strong flares, including one X2.2 event. The complexity of the current density and current helicity is studied through cancellation analysis of their sign-singular measure, which features power-law scaling. Spectral analysis is also performed, revealing the presence of two separate scaling ranges with different spectral indices. The time evolution of the parameters is discussed. Sudden changes of the cancellation exponents at the time of large flares and the presence of correlation with Extreme-Ultra-Violet and X-ray flux suggest that the eruption of large flares can be linked to the small-scale properties of the current structures.
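
    A minimal sketch of the kind of cancellation analysis named above, as we read the standard sign-singular-measure approach (partition the signed field into boxes of size l and fit the power law chi(l) ~ l**(-kappa)); the field, names, and box sizes are illustrative, not the authors' pipeline:

        import numpy as np

        def cancellation_exponent(field, box_sizes):
            """Estimate the cancellation exponent kappa of a signed 2D field
            (e.g. current helicity) from the scaling chi(l) ~ l**(-kappa)."""
            total = np.abs(field).sum()
            chis = []
            for l in box_sizes:
                ny, nx = (field.shape[0] // l) * l, (field.shape[1] // l) * l
                boxes = field[:ny, :nx].reshape(ny // l, l, nx // l, l)
                chis.append(np.abs(boxes.sum(axis=(1, 3))).sum() / total)
            # Fit a power law in log-log space; the slope is -kappa.
            slope, _ = np.polyfit(np.log(box_sizes), np.log(chis), 1)
            return -slope

        # Toy usage on a random signed field
        rng = np.random.default_rng(1)
        print(round(cancellation_exponent(rng.standard_normal((256, 256)), [2, 4, 8, 16, 32]), 2))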

  18. Parallel Large-Scale Molecular Dynamics Simulation Opens New Perspective to Clarify the Effect of a Porous Structure on the Sintering Process of Ni/YSZ Multiparticles.

    PubMed

    Xu, Jingxiang; Higuchi, Yuji; Ozawa, Nobuki; Sato, Kazuhisa; Hashida, Toshiyuki; Kubo, Momoji

    2017-09-20

    Ni sintering in the Ni/YSZ porous anode of a solid oxide fuel cell changes the porous structure, leading to degradation. Preventing sintering and degradation during operation is a great challenge. Usually, a sintering molecular dynamics (MD) simulation model consisting of two particles on a substrate is used; however, the model cannot reflect the porous structure effect on sintering. In our previous study, a multi-nanoparticle sintering modeling method with tens of thousands of atoms revealed the effect of the particle framework and porosity on sintering. However, the method cannot reveal the effect of the particle size on sintering and the effect of sintering on the change in the porous structure. In the present study, we report a strategy to reveal them in the porous structure by using our multi-nanoparticle modeling method and a parallel large-scale multimillion-atom MD simulator. We used this method to investigate the effect of YSZ particle size and tortuosity on sintering and degradation in the Ni/YSZ anodes. Our parallel large-scale MD simulation showed that the sintering degree decreased as the YSZ particle size decreased. The gas fuel diffusion path, which reflects the overpotential, was blocked by pore coalescence during sintering. The degradation of gas diffusion performance increased as the YSZ particle size increased. Furthermore, the gas diffusion performance was quantified by a tortuosity parameter and an optimal YSZ particle size, which is equal to that of Ni, was found for good diffusion after sintering. These findings cannot be obtained by previous MD sintering studies with tens of thousands of atoms. The present parallel large-scale multimillion-atom MD simulation makes it possible to clarify the effects of the particle size and tortuosity on sintering and degradation.

  19. Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brancucci Martinez-Anido, C.; Hodge, B. M.

    2014-09-01

    This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).

  20. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    PubMed

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  1. Large-scale performance evaluation of Accu-Chek inform II point-of-care glucose meters.

    PubMed

    Jeong, Tae-Dong; Cho, Eun-Jung; Ko, Dae-Hyun; Lee, Woochang; Chun, Sail; Hong, Ki-Sook; Min, Won-Ki

    2016-12-01

    The aim of this study was to report the experience of a large-scale performance evaluation of 238 Accu-Chek Inform II point-of-care (POC) glucose meters in a single medical setting. The repeatability of 238 POC devices, the within-site imprecision of 12 devices, and the linearity of 49 devices were evaluated using glucose control solutions. The glucose results of 24 POC devices and the central laboratory were compared using patient samples. The mean concentration of the control solutions was 2.39 mmol/L for Level 1 and 16.52 mmol/L for Level 2. The pooled repeatability coefficient of variation (CV) of the 238 devices was 2.0% for Level 1 and 1.6% for Level 2. The pooled within-site imprecision CV and reproducibility CV of the 12 devices were 2.7% and 2.7% for Level 1, and 1.9% and 1.9% for Level 2, respectively. The test results of all 49 devices were linear within the analytical measurement range of 1.55-31.02 mmol/L. The correlation coefficient for individual POC devices ranged from 0.9967 to 0.9985. The total correlation coefficient for the 24 devices was 0.998. The Accu-Chek Inform II POC blood glucose meters performed well in terms of precision, linearity, and correlation evaluations. Consensus guidelines for large-scale performance evaluations of POC devices are required.

  2. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073. …performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been…

  3. Demonstrating a new framework for the comparison of environmental impacts from small- and large-scale hydropower and wind power projects.

    PubMed

    Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi

    2014-07-01

    Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably in all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m2/MWh, which is more than two times larger than that of small-scale hydropower, where the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and is in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Large-scale synthesis of high-quality hexagonal boron nitride nanosheets for large-area graphene electronics.

    PubMed

    Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo

    2012-02-08

    Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical for the formation of pure h-BN nanosheets as well as for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased 3 times compared to that without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society

  5. Determining the Influence of Groundwater Composition on the Performance of Arsenic Adsorption Columns Using Rapid Small-Scale Column Tests

    NASA Astrophysics Data System (ADS)

    Aragon, A. R.; Siegel, M.

    2004-12-01

    The USEPA has established a more stringent drinking water standard for arsenic, reducing the maximum contaminant level (MCL) from 50 μg/L to 10 μg/L. This will affect many small communities in the US that lack the appropriate treatment infrastructure and funding to reduce arsenic to such levels. For such communities, adsorption systems are the preferred technology based on ease of operation and relatively lower costs. The performance of adsorption media for the removal of arsenic from drinking water is dependent on site-specific water quality. At certain concentrations, co-occurring solutes will compete effectively with arsenic for sorption sites, potentially reducing the sorption capacity of the media. Due to the site-specific nature of water quality and variations in media properties, pilot scale studies are typically carried out to ensure that a proposed treatment technique is cost effective before installation of a full-scale system. Sandia National Laboratories is currently developing an approach to utilize rapid small-scale columns in lieu of pilot columns to test innovative technologies that could significantly reduce the cost of treatment in small communities. Rapid small-scale column tests (RSSCTs) were developed to predict full-scale treatment of organic contaminants by adsorption onto granular activated carbon (GAC). This process greatly reduced the time and costs required to verify performance of GAC adsorption columns. In this study, the RSSCT methodology is used to predict the removal of inorganic arsenic using mixed metal oxyhydroxide adsorption media. The media are engineered and synthesized from materials that control arsenic behavior in natural and disturbed systems. We describe the underlying theory and application of RSSCTs for the performance evaluation of novel media in several groundwater compositions. Results of small-scale laboratory columns are being used to predict the performance of pilot-scale systems and ultimately to design full-scale systems. RSSCTs will be performed on a suite of water compositions representing the variety of water supplies in the United States that are affected by the new drinking water standard. Ultimately, this approach will be used to carry out inexpensive short-term pilot studies at a large number of sites where large-scale pilots are not economically feasible. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  6. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and in turn solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which in contrast to the standard projection technique converges in principle to the exact eigenvalues. This approach is not just applicable to eigenvalue problems encountered in many-body systems but also in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
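
    A toy sketch of the basic projection idea described above, assuming a Hamiltonian-like matrix with a pronounced dominant diagonal: keep the lowest-energy basis states, project, and hand the small effective model to a standard solver. The authors' generalization, which converges to the exact eigenvalues, is more elaborate than this simple truncation:

        import numpy as np

        def projected_eigensolve(H, k_keep):
            """Keep the k_keep basis states with the lowest diagonal 'energies',
            project H onto that subspace, and solve the small eigenproblem."""
            keep = np.argsort(np.diag(H))[:k_keep]   # project out high-energy degrees of freedom
            H_eff = H[np.ix_(keep, keep)]            # effective model of smaller complexity
            return np.linalg.eigvalsh(H_eff)         # standard eigenvalue solver on the reduced model

        # Toy usage: a symmetric matrix with a dominant diagonal part
        rng = np.random.default_rng(2)
        n = 400
        H = np.diag(np.sort(rng.uniform(0, 50, n))) + 0.1 * rng.standard_normal((n, n))
        H = (H + H.T) / 2
        print(projected_eigensolve(H, 20)[:3])       # approximate lowest eigenvalues
        print(np.linalg.eigvalsh(H)[:3])             # exact values, for comparison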

  7. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms, developed for and implemented on massively parallel computers (hereafter referred to as Scalable High-Performance Computers, SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
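
    For illustration only, the first two tasks named above (generation/assembly of the system matrix and solution of the resulting equations) on a toy one-dimensional bar model; this is a serial sketch with assumed names and properties, not the parallel SHPC algorithms of the paper:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def assemble_and_solve(n_elems, EA=1.0, L=1.0, tip_load=1.0):
            """Assemble the global stiffness matrix for a 1D bar of n_elems
            two-node elements (fixed at node 0) and solve K u = f."""
            k = EA * n_elems / L                      # element stiffness EA / element length
            ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
            rows, cols, vals = [], [], []
            for e in range(n_elems):                  # matrix generation and assembly
                dofs = [e, e + 1]
                for a in range(2):
                    for b in range(2):
                        rows.append(dofs[a]); cols.append(dofs[b]); vals.append(ke[a, b])
            K = sp.coo_matrix((vals, (rows, cols)), shape=(n_elems + 1, n_elems + 1)).tocsr()
            f = np.zeros(n_elems + 1); f[-1] = tip_load
            free = np.arange(1, n_elems + 1)          # apply the fixed boundary condition at node 0
            u = np.zeros(n_elems + 1)
            u[free] = spla.spsolve(K[free][:, free], f[free])
            return u

        print(assemble_and_solve(4)[-1])              # tip displacement; analytically PL/EA = 1.0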

  8. The impact of new forms of large-scale general practice provider collaborations on England's NHS: a systematic review.

    PubMed

    Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel

    2018-03-01

    Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.

  9. Patterns of resting state connectivity in human primary visual cortical areas: a 7T fMRI study.

    PubMed

    Raemaekers, Mathijs; Schellekens, Wouter; van Wezel, Richard J A; Petridou, Natalia; Kristo, Gert; Ramsey, Nick F

    2014-01-01

    The nature and origin of fMRI resting state fluctuations and connectivity are still not fully known. More detailed knowledge on the relationship between resting state patterns and brain function may help to elucidate this matter. We therefore performed an in-depth study of how resting state fluctuations map to the well-known architecture of the visual system. We investigated resting state connectivity at both a fine and large scale within and across visual areas V1, V2 and V3 in ten human subjects using a 7 Tesla scanner. We found evidence for several coexisting and overlapping connectivity structures at different spatial scales. At the fine-scale level we found enhanced connectivity between the same topographic locations in the fieldmaps of V1, V2 and V3, enhanced connectivity to the contralateral functional homologue, and to a lesser extent enhanced connectivity between iso-eccentric locations within the same visual area. However, by far the largest proportion of the resting state fluctuations occurred within large-scale bilateral networks. These large-scale networks mapped to some extent onto the architecture of the visual system and could thereby obscure fine-scale connectivity. In fact, most of the fine-scale connectivity only became apparent after the large-scale network fluctuations were filtered from the timeseries. We conclude that fMRI resting state fluctuations in the visual cortex may in fact be a composite signal of different overlapping sources. Isolating the different sources could enhance correlations between BOLD and electrophysiological correlates of resting state activity. © 2013 Elsevier Inc. All rights reserved.

  10. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
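
    Among the methods named above, inverse distance weighting is the simplest; a minimal sketch follows (illustrative names and toy data, not the study's combined step-wise GWR plus IDW scheme):

        import numpy as np

        def idw(station_xy, station_values, grid_xy, power=2.0):
            """Inverse distance weighting of daily rainfall from stations to grid points."""
            d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=-1)
            d = np.maximum(d, 1e-9)                 # avoid division by zero at station locations
            w = 1.0 / d ** power
            return (w * station_values[None, :]).sum(axis=1) / w.sum(axis=1)

        # Toy usage: 3 stations, 2 grid points (coordinates in km, rainfall in mm)
        stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        rain = np.array([5.0, 0.0, 12.0])
        grid = np.array([[2.0, 2.0], [8.0, 1.0]])
        print(np.round(idw(stations, rain, grid), 2))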

  11. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  12. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE PAGES

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    2017-01-07

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  13. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  14. Large-scale coherent structures of suspended dust concentration in the neutral atmospheric surface layer: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing

    2018-04-01

    Dust particles can remain suspended in the atmospheric boundary layer, motions of which are primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organizations of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that large- and very-large-scale motions in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high Reynolds number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer based on an Eulerian modeling approach and large-eddy simulation technique are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of concentration structures are determined, compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Through the quadrant analysis, it is indicated that the vertical dust transport is closely related to the large-scale roll modes, and ejections in high-concentration regions are the major mechanisms for the upward motions of dust particles.

  15. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis to handle large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law, whereby we closely examine individual-level data to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we considered the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on the 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees to carry out an in-depth analysis. Among the eight categories of medical costs, we considered the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority action to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when we detect deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
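
    A minimal sketch of a first-digit Benford test using the MAD statistic mentioned above (the function names, synthetic data, and digit-extraction shortcut are our assumptions; the study's fee-level screening and thresholds are more involved):

        import numpy as np

        def benford_mad(amounts):
            """Mean absolute deviation of first-digit frequencies from Benford's Law."""
            digits = np.array([int(str(abs(a)).lstrip("0.").replace(".", "")[0]) for a in amounts if a != 0])
            observed = np.array([(digits == d).mean() for d in range(1, 10)])
            expected = np.log10(1.0 + 1.0 / np.arange(1, 10))   # Benford first-digit proportions
            return np.abs(observed - expected).mean()

        # Toy usage: synthetic claim amounts; a larger MAD suggests closer scrutiny is warranted
        rng = np.random.default_rng(3)
        claims = np.exp(rng.normal(8, 1.5, 10000))   # log-normal amounts roughly follow Benford
        print(round(benford_mad(claims), 4))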

  16. Resolving the Circumstellar Environment of the Galactic B[e] Supergiant Star MWC 137 from Large to Small Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Michaela; Nickeler, Dieter H.; Liimets, Tiina

    The Galactic object MWC 137 has been suggested to belong to the group of B[e] supergiants. However, with its large-scale optical bipolar ring nebula and high-velocity jet and knots, it is a rather atypical representative of this class. We performed multiwavelength observations spreading from the optical to the radio regimes. Based on optical imaging and long-slit spectroscopic data, we found that the northern parts of the large-scale nebula are predominantly blueshifted, while the southern regions appear mostly redshifted. We developed a geometrical model consisting of two double cones. Although various observational features can be approximated with such a scenario, the observed velocity pattern is more complex. Using near-infrared integral-field unit spectroscopy, we studied the hot molecular gas in the vicinity of the star. The emission from the hot CO gas arises in a small-scale disk revolving around the star on Keplerian orbits. Although the disk itself cannot be spatially resolved, its emission is reflected by the dust arranged in arc-like structures and the clumps surrounding MWC 137 on small scales. In the radio regime, we mapped the cold molecular gas in the outskirts of the optical nebula. We found that large amounts of cool molecular gas and warm dust embrace the optical nebula in the east, south, and west. No cold gas or dust was detected in the north and northwestern regions. Despite the new insights into the nebula kinematics gained from our studies, the real formation scenario of the large-scale nebula remains an open issue.

  17. On the relationship between large-scale climate modes and regional synoptic patterns that drive Victorian rainfall

    NASA Astrophysics Data System (ADS)

    Verdon-Kidd, D. C.; Kiem, A. S.

    2009-04-01

    In this paper regional (synoptic) and large-scale climate drivers of rainfall are investigated for Victoria, Australia. A non-linear classification methodology known as self-organizing maps (SOM) is used to identify 20 key regional synoptic patterns, which are shown to capture a range of significant synoptic features known to influence the climate of the region. Rainfall distributions are assigned to each of the 20 patterns for nine rainfall stations located across Victoria, resulting in a clear distinction between wet and dry synoptic types at each station. The influence of large-scale climate modes on the frequency and timing of the regional synoptic patterns is also investigated. This analysis revealed that phase changes in the El Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and/or the Southern Annular Mode (SAM) are associated with a shift in the relative frequency of wet and dry synoptic types on an annual to inter-annual timescale. In addition, the relative frequency of synoptic types is shown to vary on a multi-decadal timescale, associated with changes in the Inter-decadal Pacific Oscillation (IPO). Importantly, these results highlight the potential to utilise the link between the regional synoptic patterns derived in this study and large-scale climate modes to improve rainfall forecasting for Victoria, both in the short- (i.e. seasonal) and long-term (i.e. decadal/multi-decadal scale). In addition, the regional and large-scale climate drivers identified in this study provide a benchmark by which the performance of Global Climate Models (GCMs) may be assessed.
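
    As a rough illustration of the self-organizing map (SOM) classification step described above, the sketch below trains a small SOM from scratch on synthetic daily pressure-anomaly fields and assigns each day to one of 20 nodes; the grid size, map dimensions and training schedule are illustrative assumptions rather than the study's settings:

```python
import numpy as np

def train_som(data, rows=4, cols=5, iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map and return its node weight vectors."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.normal(size=(rows * cols, dim))
    # Grid coordinates of each node, used by the neighbourhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(n)]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))                # neighbourhood weights
        weights += lr * h[:, None] * (x - weights)
    return weights

# Hypothetical daily synoptic fields: 3000 days x 100 grid points of pressure anomalies.
fields = np.random.default_rng(1).normal(size=(3000, 100))
nodes = train_som(fields)                 # 4 x 5 = 20 synoptic types, as in the study
types = np.argmin(((fields[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
print(np.bincount(types, minlength=20))   # how often each synoptic type occurs
```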

  18. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    PubMed

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
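
    A toy sketch of the "unit cell" reorganization idea described above: Lagrangian particles are grouped by the Eulerian grid cell that contains them, so both representations can be queried cell by cell. The grid resolution, particle count and query are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

# Hypothetical data: a 16^3 Eulerian grid on the unit cube plus 100,000 particles.
nx = 16
rng = np.random.default_rng(0)
positions = rng.random((100_000, 3))        # Lagrangian particle positions
density = rng.random((nx, nx, nx))          # an Eulerian field on the same grid

# Assign every particle to the "unit cell" (grid cell) that contains it.
cells = np.minimum((positions * nx).astype(int), nx - 1)
cell_to_particles = defaultdict(list)
for pid, (i, j, k) in enumerate(map(tuple, cells)):
    cell_to_particles[(i, j, k)].append(pid)

# A joint Eulerian-Lagrangian query: particles residing in the densest Eulerian cell.
i, j, k = np.unravel_index(np.argmax(density), density.shape)
print(len(cell_to_particles[(i, j, k)]), "particles in the densest cell")
```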

  19. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the « global » version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a « global » high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is assumed to be provided by the driving fields. However, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses) and only basic interpolation between fields is applied, the optimum nudging time differs from zero while remaining smaller than the predictability time.
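
    A minimal sketch of Newtonian relaxation ("nudging") toward a driving field, showing the role of the nudging time discussed above; the toy scalar dynamics and parameter values are assumptions for illustration only:

```python
import numpy as np

def nudged_step(x, x_driving, dt, tau, tendency):
    """One time step: the model's own tendency plus relaxation toward the driver.

    tau is the nudging time; a small tau pins the model to the driving field,
    while a large tau leaves the model essentially free."""
    return x + dt * (tendency(x) - (x - x_driving) / tau)

dt, tau = 0.01, 0.5
x = 1.0
for n in range(1000):
    t = n * dt
    x_driving = np.sin(0.2 * t)             # e.g., a time-interpolated 6-hourly analysis
    x = nudged_step(x, x_driving, dt, tau, tendency=lambda s: -0.1 * s)
print(f"final model state {x:.3f}, driving field {np.sin(0.2 * 1000 * dt):.3f}")
```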

  20. Large-Scale Wind Turbine Testing in the NASA 24.4 m (80 ft) by 36.6 m (120 ft) Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Zell, Peter T.; Imprexia, Cliff (Technical Monitor)

    2000-01-01

    The 80- by 120-Foot Wind Tunnel at NASA Ames Research Center in California provides a unique capability to test large-scale wind turbines under controlled conditions. This special capability is now available for domestic and foreign entities wishing to test large-scale wind turbines. The presentation will focus on facility capabilities to perform wind turbine tests and typical research objectives for this type of testing.

  1. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  2. Improving parallel I/O autotuning with performance modeling

    DOE PAGES

    Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...

    2014-01-01

    Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving 54X I/O performance speedup.
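
    A rough sketch of the general idea of using a predictive performance model to prune an I/O tuning parameter space before running real benchmarks; the parameter names, value ranges and random-forest surrogate are illustrative assumptions, not the paper's actual framework:

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tunables: stripe count, stripe size (MB) and collective buffer size (MB).
stripe_counts = [1, 4, 8, 16, 32, 64]
stripe_sizes_mb = [1, 4, 16, 64]
cb_buffer_mb = [4, 16, 64]
space = np.array(list(itertools.product(stripe_counts, stripe_sizes_mb, cb_buffer_mb)))

# Pretend a small random subset of configurations was actually benchmarked (seconds).
rng = np.random.default_rng(0)
sampled = rng.choice(len(space), size=20, replace=False)
runtimes = 100.0 / (space[sampled, 0] * space[sampled, 1]) + rng.normal(0, 0.05, 20)

# Fit a surrogate model, then keep only the most promising configurations to run for real.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(space[sampled], runtimes)
shortlist = space[np.argsort(model.predict(space))[:5]]
print("configurations worth benchmarking next:\n", shortlist)
```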

  3. When Good Intentions and Reality Meet: Large-Scale Reform of Science Teaching in Urban Schools with Predominantly Latino ELL Students

    ERIC Educational Resources Information Center

    Johnson, Carla C.; Bolshakova, Virginia L. J.; Waldron, Tammy

    2016-01-01

    This study examined the ability of Transformative Professional Development (TPD) to transform science teacher quality and associated impact on science achievement, including particular focus on English Language Learners (ELL). TPD was implemented in a large, low-performing, urban district in the southwest with predominantly Latino ELL populations.…

  4. Enhancing Performance of Large-Area Organic Solar Cells with Thick Film via Ternary Strategy.

    PubMed

    Zhang, Jianqi; Zhao, Yifan; Fang, Jin; Yuan, Liu; Xia, Benzheng; Wang, Guodong; Wang, Zaiyu; Zhang, Yajie; Ma, Wei; Yan, Wei; Su, Wenming; Wei, Zhixiang

    2017-06-01

    Large-scale fabrication of organic solar cells requires an active layer with high thickness tolerability and the use of environment-friendly solvents. Thick films with high performance can be achieved via the ternary strategy studied herein. The ternary system consists of one polymer donor, one small molecule donor, and one fullerene acceptor. The small molecule enhances the crystallinity and face-on orientation of the active layer, leading to improved thickness tolerability compared with that of a polymer-fullerene binary system. An active layer with 270 nm thickness exhibits an average power conversion efficiency (PCE) of 10.78%, while the PCE is less than 8% with such a thick film for the binary system. Furthermore, large-area devices are successfully fabricated using polyethylene terephthalate (PET)/silver grid or indium tin oxide (ITO)-based transparent flexible substrates. The product shows a high PCE of 8.28% with an area of 1.25 cm² for a single cell and 5.18% for a 20 cm² module. This study demonstrates that ternary organic solar cells exhibit great potential for large-scale fabrication and future applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. The effects of streamwise concave curvature on turbulent boundary layer structure

    NASA Astrophysics Data System (ADS)

    Jeans, A. H.; Johnston, J. P.

    1982-06-01

    Concave curvature has a relatively large, unpredictable effect on turbulent boundary layers. Some, but not all, previous studies suggest that a large-scale, stationary array of counter-rotating vortices exists within the turbulent boundary layer on a concave wall. The objective of the present study was to obtain a qualitative model of the flow field in order to increase our understanding of the underlying physics. A large free-surface water channel was constructed in order to perform a visual study of the flow. Streamwise components of mean velocity and turbulence intensity were measured using a hot-film anemometer. The upstream boundary layer was spanwise uniform, with a momentum-thickness-to-radius-of-curvature ratio of 0.05. Compared to flat-wall flow, large-scale, randomly distributed sweeps and ejections were seen in the boundary layer on the concave wall. The sweeps appear to suppress the normal mechanism for turbulence production near the wall by inhibiting the bursting process. The ejections appear to enhance turbulence production in the outer layers as the low-speed fluid convected from regions near the wall interacts with the higher-speed fluid farther out. The large-scale structures did not occur at fixed spanwise locations, and could not be called roll cells or vortices.

  6. Performance of Sweetpotato for Bioregenerative Life Support

    NASA Technical Reports Server (NTRS)

    Barta, Daniel J.; Henderson, Keith E.; Mortley, Desmond G.; Henninger, Donald L.

    2001-01-01

    Sweetpotato was successfully grown to harvest maturity in a large-scale atmospherically-closed controlled environment chamber. Yield of edible biomass and capacity for contributing to air revitalization and water recovery were documented. Yield was slightly less than that found in smaller-scale studies, but this is not unusual (Wheeler 1999). Continued work is suggested to improve control of storage root initiation, bulking and vine growth.

  7. Cognitive Performance Decrement in U.S. Army Aircrews.

    DTIC Science & Technology

    1985-08-31

    Only fragments of this report record are available. The acknowledgments credit technical insight into the challenges associated with large-scale data collection; the recoverable table-of-contents entries list appendices on rating scales for a helicopter task taxonomy and a literature review on time estimation; and a surviving passage notes that the time estimation methodology employed in the Glickman study did a minimal job of discriminating tasks.

  8. Static analysis techniques for semiautomatic synthesis of message passing software skeletons

    DOE PAGES

    Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...

    2015-06-29

    The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a "program skeleton" that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as by showing the performance speedup of using skeletons by running simulations in the SST/macro simulator.

  9. The Role of Free Stream Turbulence on the Aerodynamic Performance of a Wind Turbine Blade

    NASA Astrophysics Data System (ADS)

    Maldonado, Victor; Thormann, Adrien; Meneveau, Charles; Castillo, Luciano

    2014-11-01

    Effects of free stream turbulence with large integral scale on the aerodynamic performance of an S809 airfoil-based wind turbine blade at low Reynolds number are studied using wind tunnel experiments. A constant chord (2-D) S809 airfoil wind turbine blade model with an operating Reynolds number of 208,000 based on chord length was tested for a range of angles of attack representative of fully attached and stalled flow as encountered in typical wind turbine operation. The smooth-surface blade was subjected to a quasi-laminar free stream with very low free-stream turbulence as well as to elevated free-stream turbulence generated by an active grid. This turbulence contained large-scale eddies with levels of free-stream turbulence intensity of up to 6.14% and an integral length scale of about 60% of chord-length. The pressure distribution was acquired using static pressure taps and the lift was subsequently computed by numerical integration. The wake velocity deficit was measured utilizing hot-wire anemometry to compute the drag coefficient also via integration. In addition, the mean flow was quantified using 2-D particle image velocimetry (PIV) over the suction surface of the blade. Results indicate that turbulence, even with very large-scale eddies comparable in size to the chord-length, significantly improves the aerodynamic performance of the blade by increasing the lift coefficient and overall lift-to-drag ratio, L/D for all angles tested except zero degrees.
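
    A schematic sketch of the two integrations mentioned above: a lift (normal-force) coefficient from a surface pressure-tap distribution and a drag coefficient from the wake velocity deficit via the momentum integral. The tap locations, wake profile and flow values are made-up illustrative data, not measurements from the study:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, kept local to avoid NumPy version differences."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical pressure-tap data: chordwise position x/c and pressure coefficient Cp.
x_c = np.linspace(0.0, 1.0, 30)
cp_lower = 0.4 * (1 - x_c)                  # made-up pressure-side distribution
cp_upper = -1.2 * np.exp(-3 * x_c)          # made-up suction-side distribution
cl = trapz(cp_lower - cp_upper, x_c)        # normal-force (approx. lift) coefficient

# Hypothetical hot-wire wake survey one chord downstream: u/U_inf across the wake.
y_c = np.linspace(-1.0, 1.0, 200)
u_ratio = 1 - 0.15 * np.exp(-(y_c / 0.2) ** 2)
cd = 2 * trapz(u_ratio * (1 - u_ratio), y_c)   # momentum-deficit integral

print(f"Cl = {cl:.3f}, Cd = {cd:.4f}, L/D = {cl / cd:.1f}")
```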

  10. Psychometric properties of the feedback orientation scale among South African salespersons.

    PubMed

    Lilford, Neil; Caruana, Albert; Pitt, Leyland

    2014-02-01

    Feedback to employees is an important management tool, and the literature demonstrates that it has a positive effect on learning, motivation, and job performance. This study investigates in a non-U.S. context the psychometric properties of the Feedback Orientation Scale. Data were gathered from a sample of 202 salespersons from a large South African firm within the industrial fuels and lubricants sector. Confirmatory Factor Analysis provided evidence for the intended dimensionality, reliability, and convergent and discriminant validity of the scale.

  11. Seattle wide-area information for travelers (SWIFT) : architecture study

    DOT National Transportation Integrated Search

    1998-10-19

    The SWIFT (Seattle Wide-area Information For Travelers) Field Operational Test was intended to evaluate the performance of a large-scale urban Advanced Traveler Information System (ATIS) deployment in the Seattle area. The unique features of the SWIF...

  12. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
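
    A minimal sketch of a large-scale stochastic forcing term with a prescribed forcing time scale, in the spirit of the Eswaran-Pope scheme cited above; the Ornstein-Uhlenbeck discretization, mode count and parameter values are illustrative assumptions:

```python
import numpy as np

def ou_forcing(n_modes, n_steps, dt, t_force, sigma2, seed=0):
    """Evolve complex forcing amplitudes as Ornstein-Uhlenbeck processes.

    t_force is the forcing time scale: larger values give smoother, more
    persistent large-scale forcing; smaller values approach white noise."""
    rng = np.random.default_rng(seed)
    b = np.zeros(n_modes, dtype=complex)
    history = np.empty((n_steps, n_modes), dtype=complex)
    for n in range(n_steps):
        noise = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
        b = b * (1 - dt / t_force) + np.sqrt(2 * sigma2 * dt / t_force) * noise
        history[n] = b
    return history

short = ou_forcing(6, 20_000, dt=1e-3, t_force=0.05, sigma2=1.0)
long_ = ou_forcing(6, 20_000, dt=1e-3, t_force=1.0, sigma2=1.0)
# With the longer forcing time scale the forcing decorrelates far more slowly.
print(np.abs(short).mean(), np.abs(long_).mean())
```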

  13. Investigating the Impact of Job Satisfaction/Dissatisfaction on Iranian English Teachers' Job Performance

    ERIC Educational Resources Information Center

    Afshar, Hassan Soodmand; Doosti, Mehdi

    2016-01-01

    As part of a large-scale project, this study investigated the differences between satisfied and dissatisfied Iranian junior secondary school English teachers in terms of their job performance. To this end, 64 Iranian English teachers and 1774 of their students completed a validated questionnaire specifically developed to investigate EFL teachers'…

  14. Understanding the Relationships between Interest in Online Math Games and Academic Performance

    ERIC Educational Resources Information Center

    Zhang, M.

    2015-01-01

    Although the Internet is widely used by students in both formal and informal environments, little is known about how and where youth spend their time online. Using Internet search and Web analytics data, this study discovered a large-scale phenomenon associated with the poor performance of elementary school students in the USA that has been…

  15. Performance Assessments in Science: Hands-On Tasks and Scoring Guides.

    ERIC Educational Resources Information Center

    Stecher, Brian M.; Klein, Stephen P.

    In 1992, RAND received a grant from the National Science Foundation to study the technical quality of performance assessments in science and to evaluate their feasibility for use in large-scale testing programs. The specific goals of the project were to assess the reliability and validity of hands-on science testing and to investigate the cost and…

  16. A prototype automatic phase compensation module

    NASA Technical Reports Server (NTRS)

    Terry, John D.

    1992-01-01

    The growing demands for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern for reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector. These materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.

  17. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedural implementations developed in Matlab to simulate agent-based models, run on clusters that provide the high-performance parallel computation needed to execute the simulations. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  18. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    PubMed Central

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660

  19. Centrifugal fans: Similarity, scaling laws, and fan performance

    NASA Astrophysics Data System (ADS)

    Sardar, Asad Mohammad

    Centrifugal fans are rotodynamic machines used for moving air continuously against moderate pressures through ventilation and air conditioning systems. There are five major topics presented in this thesis: (1) analysis of the fan scaling laws and consequences of dynamic similarity on modelling; (2) detailed flow visualization studies (in water) covering the flow path starting at the fan blade exit to the evaporator core of an actual HVAC fan scroll-diffuser module; (3) mean velocity and turbulence intensity measurements (flow field studies) at the inlet and outlet of a large-scale blower; (4) fan installation effects on overall fan performance and evaluation of fan testing methods; (5) two-point coherence and spectral measurements conducted on an actual HVAC fan module for flow structure identification of possible aeroacoustic noise sources. A major objective of the study was to identify flow structures within the HVAC module that are responsible for noise and in particular "rumble noise" generation. Possible mechanisms for the generation of flow-induced noise in the automotive HVAC fan module are also investigated. It is demonstrated that different modes of HVAC operation represent very different internal flow characteristics. This has implications for both fan HVAC airflow performance and noise characteristics. It is demonstrated from principles of complete dynamic similarity that the fan scaling laws require Reynolds number matching as a necessary condition for developing scale-model fans or fan test facilities. The physical basis for the fan scaling laws derived was established both from pure dimensional analysis and from the fundamental equations of fluid motion. Fan performance was measured in air in a three-times scale model (large-scale blower) of an actual forward-curved automotive HVAC blower. Different fan testing methods (based on AMCA fan test codes) were compared on the basis of static pressure measurements. Also, the flow through an actual HVAC fan-impeller/diffuser section in water was observed with a flow visualization technique using a shear-thickening dye (in addition to a conventional dye). Full dynamic similarity was maintained for HVAC operation in water as when operated in air. Recommendations are provided both for further investigation of critical flow regions with more sophisticated measurement methods and for improved fan-scroll design to reduce possible aeroacoustic noise with improved aerodynamic performance.
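
    A short worked example of the fan similarity (affinity) laws that underlie the scale-model testing described above, together with the Reynolds-number matching condition; the fan dimensions, speed and operating point are hypothetical, and the same working fluid is assumed for model and prototype:

```python
# Fan affinity laws for geometrically similar fans in the same fluid:
#   flow       Q ~ N * D^3
#   pressure   P ~ N^2 * D^2
#   power      W ~ N^3 * D^5
# Reynolds-number similarity additionally requires N * D^2 = constant.

D1, N1 = 0.15, 3000.0        # prototype blower: 0.15 m wheel at 3000 rpm (hypothetical)
Q1, P1 = 0.04, 250.0         # its flow rate (m^3/s) and pressure rise (Pa)

D2 = 3 * D1                  # a three-times scale model, as in the study
N2 = N1 * (D1 / D2) ** 2     # speed that keeps N * D^2 (hence Reynolds number) constant

Q2 = Q1 * (N2 / N1) * (D2 / D1) ** 3
P2 = P1 * (N2 / N1) ** 2 * (D2 / D1) ** 2
print(f"model speed {N2:.0f} rpm, flow {Q2:.3f} m^3/s, pressure rise {P2:.1f} Pa")
```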

  20. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  1. A large-scale photonic node architecture that utilizes interconnected OXC subsystems.

    PubMed

    Iwai, Yuto; Hasegawa, Hiroshi; Sato, Ken-ichi

    2013-01-14

    We propose a novel photonic node architecture that is composed of interconnected small-scale optical cross-connect subsystems. We also developed an efficient dynamic network control algorithm that complies with a restriction on the number of intra-node fibers used for subsystem interconnection. Numerical evaluations verify that the proposed architecture offers almost the same performance as the equivalent single large-scale cross-connect switch, while enabling substantial hardware scale reductions.

  2. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor collaborative distance learning applications for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
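
    A toy sketch of the event filtering and subscription idea described above: management applications register predicates, and only matching events are forwarded to them. The event fields and predicate style are invented for illustration and do not reflect the actual architecture's interfaces:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    source: str
    kind: str
    payload: dict

@dataclass
class EventFilter:
    """Forwards only the events that match a subscriber's predicates."""
    subscriptions: Dict[str, List[Callable[[Event], bool]]] = field(default_factory=dict)

    def subscribe(self, consumer: str, predicate: Callable[[Event], bool]) -> None:
        self.subscriptions.setdefault(consumer, []).append(predicate)

    def publish(self, event: Event) -> List[str]:
        # Return the consumers that should receive this event.
        return [c for c, preds in self.subscriptions.items()
                if any(p(event) for p in preds)]

bus = EventFilter()
bus.subscribe("debugger", lambda e: e.kind == "error")
bus.subscribe("reactive-control", lambda e: e.payload.get("latency_ms", 0) > 100)

e = Event(source="node-7", kind="metric", payload={"latency_ms": 250})
print(bus.publish(e))        # ['reactive-control']
```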

  3. The UKCAT-12 study: educational attainment, aptitude test performance, demographic and socio-economic contextual factors as predictors of first year outcome in a cross-sectional collaborative study of 12 UK medical schools.

    PubMed

    McManus, I C; Dewberry, Chris; Nicholson, Sandra; Dowell, Jonathan S

    2013-11-14

    Most UK medical schools use aptitude tests during student selection, but large-scale studies of predictive validity are rare. This study assesses the United Kingdom Clinical Aptitude Test (UKCAT), and its four sub-scales, along with measures of educational attainment, individual and contextual socio-economic background factors, as predictors of performance in the first year of medical school training. A prospective study of 4,811 students in 12 UK medical schools taking the UKCAT from 2006 to 2008 as a part of the medical school application, for whom first year medical school examination results were available in 2008 to 2010. UKCAT scores and educational attainment measures (General Certificate of Education (GCE): A-levels, and so on; or Scottish Qualifications Authority (SQA): Scottish Highers, and so on) were significant predictors of outcome. UKCAT predicted outcome better in female students than male students, and better in mature than non-mature students. Incremental validity of UKCAT taking educational attainment into account was significant, but small. Medical school performance was also affected by sex (male students performing less well), ethnicity (non-White students performing less well), and a contextual measure of secondary schooling, students from secondary schools with greater average attainment at A-level (irrespective of public or private sector) performing less well. Multilevel modeling showed no differences between medical schools in predictive ability of the various measures. UKCAT sub-scales predicted similarly, except that Verbal Reasoning correlated positively with performance on Theory examinations, but negatively with Skills assessments. This collaborative study in 12 medical schools shows the power of large-scale studies of medical education for answering previously unanswerable but important questions about medical student selection, education and training. UKCAT has predictive validity as a predictor of medical school outcome, particularly in mature applicants to medical school. UKCAT offers small but significant incremental validity which is operationally valuable where medical schools are making selection decisions based on incomplete measures of educational attainment. The study confirms the validity of using all the existing measures of educational attainment in full at the time of selection decision-making. Contextual measures provide little additional predictive value, except that students from high attaining secondary schools perform less well, an effect previously shown for UK universities in general.

  4. The UKCAT-12 study: educational attainment, aptitude test performance, demographic and socio-economic contextual factors as predictors of first year outcome in a cross-sectional collaborative study of 12 UK medical schools

    PubMed Central

    2013-01-01

    Background Most UK medical schools use aptitude tests during student selection, but large-scale studies of predictive validity are rare. This study assesses the United Kingdom Clinical Aptitude Test (UKCAT), and its four sub-scales, along with measures of educational attainment, individual and contextual socio-economic background factors, as predictors of performance in the first year of medical school training. Methods A prospective study of 4,811 students in 12 UK medical schools taking the UKCAT from 2006 to 2008 as a part of the medical school application, for whom first year medical school examination results were available in 2008 to 2010. Results UKCAT scores and educational attainment measures (General Certificate of Education (GCE): A-levels, and so on; or Scottish Qualifications Authority (SQA): Scottish Highers, and so on) were significant predictors of outcome. UKCAT predicted outcome better in female students than male students, and better in mature than non-mature students. Incremental validity of UKCAT taking educational attainment into account was significant, but small. Medical school performance was also affected by sex (male students performing less well), ethnicity (non-White students performing less well), and a contextual measure of secondary schooling, students from secondary schools with greater average attainment at A-level (irrespective of public or private sector) performing less well. Multilevel modeling showed no differences between medical schools in predictive ability of the various measures. UKCAT sub-scales predicted similarly, except that Verbal Reasoning correlated positively with performance on Theory examinations, but negatively with Skills assessments. Conclusions This collaborative study in 12 medical schools shows the power of large-scale studies of medical education for answering previously unanswerable but important questions about medical student selection, education and training. UKCAT has predictive validity as a predictor of medical school outcome, particularly in mature applicants to medical school. UKCAT offers small but significant incremental validity which is operationally valuable where medical schools are making selection decisions based on incomplete measures of educational attainment. The study confirms the validity of using all the existing measures of educational attainment in full at the time of selection decision-making. Contextual measures provide little additional predictive value, except that students from high attaining secondary schools perform less well, an effect previously shown for UK universities in general. PMID:24229380

  5. An exploratory, large-scale study of pain and quality of life outcomes in cancer patients with moderate or severe pain, and variables predicting improvement.

    PubMed

    Maximiano, Constanza; López, Iker; Martín, Cristina; Zugazabeitia, Luis; Martí-Ciriquián, Juan L; Núñez, Miguel A; Contreras, Jorge; Herdman, Michael; Traseira, Susana; Provencio, Mariano

    2018-01-01

    There have been few large-scale, real world studies in Spain to assess change in pain and quality of life (QOL) outcomes in cancer patients with moderate to severe pain. This study aimed to assess changes on both outcomes after 3 months of usual care and to investigate factors associated with change in QoL. Large, multi-centre, observational study in patients with lung, head and neck, colorectal or breast cancer experiencing a first episode of moderate to severe pain while attending one of the participating centres. QoL was assessed using the EuroQol-5D questionnaire and pain using the Brief Pain Inventory (BPI). Instruments were administered at baseline and after 3 months of follow up. Multivariate analyses were used to assess the impact of treatment factors, demographic and clinical variables, pain and other symptoms on QoL scores. 1711 patients were included for analysis. After 3 months of usual care, a significant improvement was observed in pain and QoL in all four cancer groups (p<0.001). Effect sizes were medium to large on the BPI and EQ-5D Index and Visual Analogue Scale (VAS). Improvements were seen on the majority of EQ-5D dimensions in all patient groups, though breast cancer patients showed the largest gains. Poorer baseline performance status (ECOG) and the presence of anxiety/depression were associated with significantly poorer QOL outcomes. Improvements in BPI pain scores were associated with improved QoL. In the four cancer types studied, pain and QoL outcomes improved considerably after 3 months of usual care. Improvements in pain made a substantial contribution to QoL gains whilst the presence of anxiety and depression and poor baseline performance status significantly constrained improvement.

  6. Fabrication and performance analysis of 4-sq cm indium tin oxide/InP photovoltaic solar cells

    NASA Technical Reports Server (NTRS)

    Gessert, T. A.; Li, X.; Phelps, P. W.; Coutts, T. J.; Tzafaras, N.

    1991-01-01

    Large-area photovoltaic solar cells based on direct-current magnetron sputter deposition of indium tin oxide (ITO) onto single-crystal p-InP substrates demonstrated both the radiation hardness and high performance necessary for extraterrestrial applications. A small-scale production project was initiated in which approximately 50 ITO/InP cells are being produced. The procedures used in this small-scale production of 4-sq cm ITO/InP cells are presented and discussed. The discussion includes analyses of the performance range of all available production cells and device performance data for the best cells produced thus far. Additionally, processing experience gained from the production of these cells is discussed, indicating other issues that may be encountered when large-scale production is begun.

  7. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    For solving nonlinear continuous optimization problems, numerous nature-inspired optimization techniques have been proposed in the literature that can be applied to real-life problems where conventional techniques cannot be used. The Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable benchmark problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
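
    A compact from-scratch sketch of the Grey Wolf Optimizer applied to the 50-dimensional Sphere function; the population size, iteration budget and bounds are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

def grey_wolf_optimizer(f, dim, bounds, n_wolves=30, max_iter=500, seed=0):
    """Minimal Grey Wolf Optimizer for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(max_iter):
        fitness = np.apply_along_axis(f, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / max_iter                    # decreases linearly from 2 to 0
        new = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(wolves.shape), rng.random(wolves.shape)
            A = 2 * a * r1 - a
            C = 2 * r2
            D = np.abs(C * leader - wolves)         # distance to the leader
            new += (leader - A * D) / 3.0           # average of the three pulls
        wolves = np.clip(new, lo, hi)
    fitness = np.apply_along_axis(f, 1, wolves)
    return wolves[np.argmin(fitness)], float(fitness.min())

def sphere(x):
    return float(np.sum(x ** 2))

best_x, best_f = grey_wolf_optimizer(sphere, dim=50, bounds=(-100, 100))
print(f"best Sphere value found in 50 dimensions: {best_f:.3e}")
```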

  8. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE PAGES

    Ceurvorst, L.; Savin, A.; Ratan, N.; ...

    2018-04-20

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^18 W/cm^2) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  9. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceurvorst, L.; Savin, A.; Ratan, N.

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^18 W/cm^2) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  10. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful to quickly produce a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM. The focus is on the computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use.
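
    A small sketch of the pairwise hinge-loss objective that linear rankSVM optimizes, trained here with plain stochastic subgradient steps; the toy query/relevance data are invented, and production solvers use far more efficient order-statistics techniques than the explicit pair enumeration shown:

```python
import numpy as np

def ranksvm_subgradient(X, y, qid, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Learn a linear scoring function w.x with the pairwise hinge loss."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    # Preference pairs (i, j): same query, document i more relevant than j.
    pairs = [(i, j) for q in np.unique(qid)
             for i in np.where(qid == q)[0] for j in np.where(qid == q)[0]
             if y[i] > y[j]]
    for _ in range(epochs):
        i, j = pairs[rng.integers(len(pairs))]
        margin = w @ (X[i] - X[j])
        grad = lam * w - (X[i] - X[j]) * (margin < 1)   # hinge subgradient
        w -= lr * grad
    return w

# Hypothetical data: 200 documents, 10 features, 20 queries, graded relevance 0-2.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
qid = rng.integers(0, 20, size=200)
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int) + (X[:, 1] > 1)
w = ranksvm_subgradient(X, y, qid)
print("learned weights:", np.round(w, 2))
```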

  11. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  12. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  13. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  14. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  15. Physical activity correlates with neurological impairment and disability in multiple sclerosis.

    PubMed

    Motl, Robert W; Snook, Erin M; Wynn, Daniel R; Vollmer, Timothy

    2008-06-01

    This study examined the correlation of physical activity with neurological impairment and disability in persons with multiple sclerosis (MS). Eighty individuals with MS wore an accelerometer for 7 days and completed the Symptom Inventory (SI), Performance Scales (PS), and Expanded Disability Status Scale. There were large negative correlations between the accelerometer and SI (r = -0.56; rho = -0.58) and Expanded Disability Status Scale (r = -0.60; rho = -0.69) and a moderate negative correlation between the accelerometer and PS (r = -0.39; rho = -0.48) indicating that physical activity was associated with reduced neurological impairment and disability. Such findings provide a preliminary basis for using an accelerometer and the SI and PS as outcome measures in large-scale prospective and experimental examinations of the effect of physical activity behavior on disability and dependence in MS.

  16. A study of rotor and platform design trade-offs for large-scale floating vertical axis wind turbines

    NASA Astrophysics Data System (ADS)

    Griffith, D. Todd; Paquette, Joshua; Barone, Matthew; Goupee, Andrew J.; Fowler, Matthew J.; Bull, Diana; Owens, Brian

    2016-09-01

    Vertical axis wind turbines are receiving significant attention for offshore siting. In general, offshore wind offers proximity to large population centers, a vast and more consistent wind resource, and a scale-up opportunity, to name a few beneficial characteristics. On the other hand, offshore wind suffers from high levelized cost of energy (LCOE) and, in particular, high balance of system (BoS) costs owing to accessibility challenges and limited project experience. To address these challenges associated with offshore wind, Sandia National Laboratories is researching large-scale (MW class) offshore floating vertical axis wind turbines (VAWTs). The motivation for this work is that floating VAWTs are a potential transformative technology solution to reduce offshore wind LCOE in deep-water locations. This paper explores performance and cost trade-offs within the design space for floating VAWTs between the configurations for the rotor and platform.

  17. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
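
    A rough illustration of the general idea of cutting binary-code training cost by learning the encoder on a sub-selected subset of the data and then encoding everything; plain PCA plus sign thresholding stands in here for the paper's actual sub-selective quantization algorithm, and the data are synthetic:

```python
import numpy as np

def learn_codes_subselect(X, n_bits=32, subset=10_000, seed=0):
    """Learn a binary encoder from a sub-selected sample, then encode all of X."""
    rng = np.random.default_rng(seed)
    sample = X[rng.choice(len(X), size=min(subset, len(X)), replace=False)]
    mean = sample.mean(axis=0)
    # The expensive learning step sees only the sub-selected rows.
    _, _, vt = np.linalg.svd(sample - mean, full_matrices=False)
    proj = vt[:n_bits].T
    # Encoding the full dataset is a single matrix multiply plus thresholding.
    return ((X - mean) @ proj > 0).astype(np.uint8)

# Hypothetical image descriptors: 200,000 vectors of dimension 128.
X = np.random.default_rng(1).normal(size=(200_000, 128)).astype(np.float32)
codes = learn_codes_subselect(X, n_bits=32)
print(codes.shape, codes.dtype)           # (200000, 32) binary codes
```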

  18. Performance Assessment of a Large Scale Pulsejet- Driven Ejector System

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.

    2006-01-01

    Unsteady thrust augmentation was measured on a large-scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lb(sub f) pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the dimensions of the ejectors which maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation studies have been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small-scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as in the small-scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems.
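
    A one-line arithmetic check of the augmentation levels reported above, using the nominal 100 lbf bare-pulsejet thrust from the abstract; the augmented thrust values are back-computed for illustration only:

```python
# Thrust augmentation ratio = (driver + ejector thrust) / (driver thrust alone).
pulsejet_thrust_lbf = 100.0                 # nominal bare-pulsejet thrust from the abstract
for label, augmentation in [("cylindrical", 1.71), ("tapered", 1.81)]:
    total = augmentation * pulsejet_thrust_lbf
    print(f"{label} ejector: total thrust of about {total:.0f} lbf")
```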

  19. Response of a 2-story test-bed structure for the seismic evaluation of nonstructural systems

    NASA Astrophysics Data System (ADS)

    Soroushian, Siavash; Maragakis, E. "Manos"; Zaghi, Arash E.; Rahmanishamsi, Esmaeel; Itani, Ahmad M.; Pekcan, Gokhan

    2016-03-01

    A full-scale, two-story, two-by-one bay, steel braced-frame was subjected to a number of unidirectional ground motions using three shake tables at the UNR-NEES site. The test-bed frame was designed to study the seismic performance of nonstructural systems including steel-framed gypsum partition walls, suspended ceilings and fire sprinkler systems. The frame can be configured to perform as an elastic or inelastic system to generate large floor accelerations or large inter-story drifts, respectively. In this study, the dynamic performance of the linear and nonlinear test-beds was comprehensively studied. The seismic performance of nonstructural systems installed in the linear and nonlinear test-beds was assessed during extreme excitations. In addition, the dynamic interactions of the test-bed and installed nonstructural systems are investigated.

  20. DIF Analysis with Multilevel Data: A Simulation Study Using the Latent Variable Approach

    ERIC Educational Resources Information Center

    Jin, Ying; Eason, Hershel

    2016-01-01

    The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling…

  1. Linguistic Simplification of Mathematics Items: Effects for Language Minority Students in Germany

    ERIC Educational Resources Information Center

    Haag, Nicole; Heppt, Birgit; Roppelt, Alexander; Stanat, Petra

    2015-01-01

    In large-scale assessment studies, language minority students typically obtain lower test scores in mathematics than native speakers. Although this performance difference was related to the linguistic complexity of test items in some studies, other studies did not find linguistically demanding math items to be disproportionally more difficult for…

  2. Scaling properties of the Arctic sea ice Deformation from Buoy Dispersion Analysis

    NASA Astrophysics Data System (ADS)

    Weiss, J.; Rampal, P.; Marsan, D.; Lindsay, R.; Stern, H.

    2007-12-01

    A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over time scales from 3 hours to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms such as winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e. it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by multi-scale fracturing/faulting processes.
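
    The spatial scaling law referred to above is usually summarized by a power-law exponent; a minimal, hypothetical illustration of how such an exponent can be estimated from mean deformation rates at several observation scales (synthetic numbers, not the IABP data) is:

        import numpy as np

        # synthetic mean deformation rates (per day) at several spatial scales (km),
        # standing in for values derived from buoy-pair dispersion
        L = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])                 # spatial scale, km
        eps = 0.05 * L ** -0.2 * (1 + 0.05 * np.random.randn(L.size))      # <deformation rate>

        # power law <eps>(L) ~ L^(-H): fit H by least squares in log-log space
        slope, intercept = np.polyfit(np.log(L), np.log(eps), 1)
        print(f"estimated spatial scaling exponent H = {-slope:.2f}")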

  3. Is University Internationalization Bad for Performance? Examining Two Different Types of Diversity

    ERIC Educational Resources Information Center

    Lauring, Jakob; Selmer, Jan

    2010-01-01

    While most studies on diverse organizations have been directed at business organizations, the academic sector has also become increasingly international and heterogeneous. Few large-scale studies have investigated multicultural academic departments and none of those have dealt with diversity and group processes. Therefore, a survey was directed…

  4. Evaluating Large-Scale Studies to Accurately Appraise Children's Performance

    ERIC Educational Resources Information Center

    Ernest, James M.

    2012-01-01

    Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international…

  5. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    PubMed Central

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such an NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects, organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages that can potentially lead to exponential research discoveries. The web-based Mind Research Network (MRN) database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  6. Local properties of the large-scale peaks of the CMB temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es

    2017-05-01

    In the present work, we study the largest structures of the CMB temperature measured by Planck in terms of the most prominent peaks on the sky, which, in particular, are located in the southern galactic hemisphere. Besides these large-scale features, the well-known Cold Spot anomaly is included in the analysis. All these peaks would contribute significantly to some of the CMB large-scale anomalies, such as the parity and hemispherical asymmetries, the dipole modulation, the alignment between the quadrupole and the octopole, or in the case of the Cold Spot, to the non-Gaussianity of the field. The analysis of the peaks is performed by using their multipolar profiles, which characterize the local shape of the peaks in terms of the discrete Fourier transform of the azimuthal angle. In order to quantify the local anisotropy of the peaks, the distribution of the phases of the multipolar profiles is studied by using the Rayleigh random walk methodology. Finally, a direct analysis of the 2-dimensional field around the peaks is performed in order to take into account the effect of the galactic mask. The results of the analysis conclude that, once the peak amplitude and its first and second order derivatives at the centre are conditioned, the rest of the field is compatible with the standard model. In particular, it is observed that the Cold Spot anomaly is caused by the large value of curvature at the centre.
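
    The Rayleigh random walk methodology mentioned above boils down to testing whether a set of phases is uniformly distributed on the circle; a generic sketch of that statistic (not the Planck analysis pipeline itself) is:

        import numpy as np

        def rayleigh_test(phases):
            """Rayleigh test for uniformity of a set of phases (radians).

            Returns the mean resultant length R and the usual large-n approximation
            of the p-value; a small p indicates significant phase alignment.
            """
            n = len(phases)
            C = np.sum(np.cos(phases))
            S = np.sum(np.sin(phases))
            R = np.sqrt(C**2 + S**2) / n      # mean resultant length
            Z = n * R**2
            p = np.exp(-Z)                    # first-order approximation
            return R, p

        # toy usage: random phases are consistent with uniformity (large p)
        rng = np.random.default_rng(1)
        print(rayleigh_test(rng.uniform(0, 2 * np.pi, size=200)))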

  7. The influence of cognitive load on spatial search performance.

    PubMed

    Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D

    2014-01-01

    During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.

  8. Seattle wide-area information for travelers (SWIFT) : consumer acceptance study

    DOT National Transportation Integrated Search

    1998-10-19

    The Seattle Wide-area Information for Travelers (SWIFT) Operational Test was intended to evaluate the performance of a large-scale, urban Advanced Traveler Information System (ATIS) deployment in the Seattle area. With the majority of the SWIFT syste...

  9. Geomorphic analysis of large alluvial rivers

    NASA Astrophysics Data System (ADS)

    Thorne, Colin R.

    2002-05-01

    Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.

  10. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.

  11. Fire extinguishing tests -80 with methyl alcohol gasoline

    NASA Astrophysics Data System (ADS)

    Holmstedt, G.; Ryderman, A.; Carlsson, B.; Lennmalm, B.

    1980-10-01

    Large scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol resistant aqueous film forming foams (AFFF), two alcohol resistant fluoroprotein foams and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests for developing a reliable laboratory method was especially examined. The tests were performed with semidirect foam application in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams. AFFF give the best results, and performance with small pools can hardly be correlated with results from large scale fires.

  12. SVM and SVM Ensembles in Breast Cancer Prediction.

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
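
    A minimal sketch, using scikit-learn, of the two ensemble set-ups the abstract singles out (bagged linear-kernel SVMs and boosted RBF-kernel SVMs); the bundled sklearn breast cancer dataset, the split and the hyper-parameters are placeholders, and the original study's exact configuration may differ.

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)

        # bagging ensemble of linear-kernel SVMs (reported above as a good small-dataset choice)
        bag_linear = make_pipeline(
            StandardScaler(),
            BaggingClassifier(SVC(kernel="linear", C=1.0), n_estimators=25, random_state=0),
        )

        # boosting ensemble of RBF-kernel SVMs
        boost_rbf = make_pipeline(
            StandardScaler(),
            AdaBoostClassifier(SVC(kernel="rbf", C=1.0, gamma="scale", probability=True),
                               n_estimators=25, random_state=0),
        )

        for name, model in [("bagged linear SVM", bag_linear), ("boosted RBF SVM", boost_rbf)]:
            acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            print(f"{name}: mean CV accuracy = {acc:.3f}")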

  13. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807

  14. The island coalescence problem: Scaling of reconnection in extended fluid models including higher-order moments

    DOE PAGES

    Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; ...

    2015-11-05

    As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.

  15. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  16. An innovative large scale integration of silicon nanowire-based field effect transistors

    NASA Astrophysics Data System (ADS)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors while offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  17. Spatial scale and distribution of neurovascular signals underlying decoding of orientation and eye of origin from fMRI data

    PubMed Central

    Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida

    2016-01-01

    Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637

  18. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
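
    For reference, the static Smagorinsky model mentioned above closes the subgrid stress through an eddy viscosity nu_t = (c_s * Delta)^2 * |S|; the sketch below evaluates that expression on a toy 3-D field with numpy and is only meant to illustrate the formula, not the solver used in the study.

        import numpy as np

        def smagorinsky_nu_t(u, v, w, dx, cs=0.1):
            """Static Smagorinsky eddy viscosity nu_t = (cs*Delta)^2 * |S| on a uniform grid.

            u, v, w : 3-D velocity components on a cubic grid with spacing dx
            |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor.
            """
            grads = [np.gradient(c, dx, dx, dx) for c in (u, v, w)]   # grads[i][j] = du_i/dx_j
            S2 = 0.0
            for i in range(3):
                for j in range(3):
                    S_ij = 0.5 * (grads[i][j] + grads[j][i])
                    S2 += 2.0 * S_ij**2
            return (cs * dx) ** 2 * np.sqrt(S2)

        # toy usage on a random field
        n, dx = 32, 1.0 / 32
        u, v, w = (np.random.randn(n, n, n) for _ in range(3))
        print(smagorinsky_nu_t(u, v, w, dx).mean())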

  19. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high performance computers) that are in turn integrated into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high performance networks.

  20. Participation in International Large-Scale Assessments from a US Perspective

    ERIC Educational Resources Information Center

    Plisko, Valena White

    2013-01-01

    International large-scale assessments (ILSAs) play a distinct role in the United States' decentralized federal education system. Separate from national and state assessments, they offer an external, objective measure for the United States to assess student performance comparatively with other countries and over time. The US engagement in ILSAs…

  1. Science Competencies That Go Unassessed

    ERIC Educational Resources Information Center

    Gilmer, Penny J.; Sherdan, Danielle M.; Oosterhof, Albert; Rohani, Faranak; Rouby, Aaron

    2011-01-01

    Present large-scale assessments require the use of item formats, such as multiple choice, that can be administered and scored efficiently. This limits competencies that can be measured by these assessments. An alternative approach to large-scale assessments is being investigated that would include the use of complex performance assessments. As…

  2. Towards Productive Critique of Large-Scale Comparisons in Education

    ERIC Educational Resources Information Center

    Gorur, Radhika

    2017-01-01

    International large-scale assessments and comparisons (ILSAs) in education have become significant policy phenomena. How a country fares in these assessments has come to signify not only how a nation's education system is performing, but also its future prospects in a global economic "race". These assessments provoke passionate arguments…

  3. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
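
    The Dice and Jaccard metrics used above to score segmentation quality are simple set overlaps; a generic sketch (not the region-templates code from the linked repository) is:

        import numpy as np

        def dice_jaccard(mask_a, mask_b):
            """Dice and Jaccard coefficients between two binary segmentation masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            inter = np.logical_and(a, b).sum()
            dice = 2.0 * inter / (a.sum() + b.sum())
            jaccard = inter / np.logical_or(a, b).sum()
            return dice, jaccard

        # toy usage: compare a reference mask with a slightly shifted one
        ref = np.zeros((100, 100), dtype=bool); ref[20:60, 20:60] = True
        seg = np.zeros((100, 100), dtype=bool); seg[25:65, 22:62] = True
        print(dice_jaccard(ref, seg))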

  4. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  5. Hybrid Wing Body Configuration Scaling Study

    NASA Technical Reports Server (NTRS)

    Nickol, Craig L.

    2012-01-01

    The Hybrid Wing Body (HWB) configuration is a subsonic transport aircraft concept with the potential to simultaneously reduce fuel burn, noise and emissions compared to conventional concepts. Initial studies focused on very large applications with capacities for up to 800 passengers. More recent studies have focused on the large, twin-aisle class with passenger capacities in the 300-450 range. Efficiently scaling this concept down to the single aisle or smaller size is challenging due to geometric constraints, potentially reducing the desirability of this concept for applications in the 100-200 passenger capacity range or less. In order to quantify this scaling challenge, five advanced conventional (tube-and-wing layout) concepts were developed, along with equivalent (payload/range/technology) HWB concepts, and their fuel burn performance compared. The comparison showed that the HWB concepts have fuel burn advantages over advanced tube-and-wing concepts in the larger payload/range classes (roughly 767-sized and larger). Although noise performance was not quantified in this study, the HWB concept has distinct noise advantages over the conventional tube-and-wing configuration due to the inherent noise shielding features of the HWB. NASA's Environmentally Responsible Aviation (ERA) project will continue to investigate advanced configurations, such as the HWB, due to their potential to simultaneously reduce fuel burn, noise and emissions.

  6. Monitoring Million Trees LA: Tree performance during the early years and future benefits

    Treesearch

    E. Gregory McPherson

    2014-01-01

    Million Trees LA (MTLA) is one of several large-scale mayoral tree planting initiatives striving to create more livable cities through urban forestry. This study combined field sampling of tree survival and growth with numerical modeling of future benefits to assess performance of MTLA plantings. From 2006 to 2010 MTLA planted a diverse mix of 91,786 trees....

  7. Political Socialization in the Family and Young People's Educational Achievement and Ambition

    ERIC Educational Resources Information Center

    Lauglo, Jon

    2011-01-01

    Using data from a large-scale survey of Norwegian youth, the study examines the relationships that the performance in key academic subjects and aiming for higher education have with political socialization at home. The more often adolescents aged 13-16 talk with parents about politics and social issues, the better their performance is and the more…

  8. Formative Assessment for College Readiness: Measuring Skill and Growth in Five Key Cognitive Strategies Associated with Postsecondary Success

    ERIC Educational Resources Information Center

    Conley, David; Lombardi, Allison; Seburn, Mary; McGaughy, Charis

    2009-01-01

    This study reports the preliminary results from a field test of the College-readiness Performance Assessment System (C-PAS), a large-scale, 6th-12th grade criterion-referenced assessment system that utilizes classroom-embedded performance tasks to measure student progress toward the development of key cognitive skills associated with success in…

  9. An Investigation of School-Level Factors Associated with Science Performance for Minority and Majority Francophone Students in Canada

    ERIC Educational Resources Information Center

    Sandilands, Debra; McKeown, Stephanie Barclay; Lyons-Thomas, Juliette; Ercikan, Kadriye

    2014-01-01

    Minority Francophone students in predominantly English-speaking Canadian provinces tend to perform lower on large-scale assessments of achievement than their Anglophone peers and majority Francophone students in Quebec. This study is the first to apply multilevel modeling methods to examine the extent to which school-level factors may be…

  10. On the large eddy simulation of turbulent flows in complex geometry

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1993-01-01

    Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
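
    The non-commutation of differentiation with a spatially varying filter width can be checked numerically; the toy sketch below filters a 1-D field with a Gaussian kernel whose width grows with x and compares the two orders of operation. The field, kernel and width profile are illustrative assumptions, and boundary effects are not treated.

        import numpy as np

        def filtered(f, x, delta):
            """Apply a Gaussian filter whose width delta(x) varies with position."""
            dx = x[1] - x[0]
            out = np.empty_like(f)
            for i, (xi, di) in enumerate(zip(x, delta)):
                g = np.exp(-0.5 * ((x - xi) / di) ** 2)
                g /= g.sum() * dx                      # normalise the kernel
                out[i] = np.sum(g * f) * dx
            return out

        # 1-D test field and a filter width that grows away from a "wall" at x = 0
        x = np.linspace(0, 2 * np.pi, 512)
        u = np.sin(3 * x) + 0.3 * np.sin(17 * x)
        delta = 0.05 + 0.15 * x / x.max()

        lhs = np.gradient(filtered(u, x, delta), x)    # d/dx of the filtered field
        rhs = filtered(np.gradient(u, x), x, delta)    # filter applied to du/dx
        print("max commutation error:", np.max(np.abs(lhs - rhs)))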

  11. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
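
    A heavily simplified sketch of the cooperation idea described above (parallel searches that periodically share their best parameter vector); it uses plain multiprocessing and random restarts of a local optimizer rather than the actual enhanced Scatter Search, so it only illustrates the information-sharing mechanism, and the cost function is a placeholder.

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import minimize

        def cost(p):
            """Placeholder cost standing in for the model-vs-data misfit."""
            return float(np.sum((p - np.arange(len(p))) ** 2) + np.sum(np.sin(5 * p) ** 2))

        def local_search(x0):
            res = minimize(cost, x0, method="Nelder-Mead")
            return res.fun, res.x

        def cooperative_search(dim=8, n_threads=4, n_rounds=5, seed=0):
            rng = np.random.default_rng(seed)
            best_f, best_x = np.inf, None
            starts = rng.uniform(-5, 5, size=(n_threads, dim))
            with Pool(n_threads) as pool:
                for _ in range(n_rounds):
                    for f, x in pool.map(local_search, list(starts)):
                        if f < best_f:
                            best_f, best_x = f, x
                    # cooperation step: every worker restarts near the best point found so far
                    starts = best_x + rng.normal(scale=0.5, size=(n_threads, dim))
            return best_f, best_x

        if __name__ == "__main__":
            f, x = cooperative_search()
            print("best cost:", f)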

  12. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  13. Scaling properties of sea ice deformation from buoy dispersion analysis

    NASA Astrophysics Data System (ADS)

    Rampal, P.; Weiss, J.; Marsan, D.; Lindsay, R.; Stern, H.

    2008-03-01

    A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over timescales from 3 h to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms such as winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e., it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by multiscale fracturing/faulting processes.

  14. How large a dataset should be in order to estimate scaling exponents and other statistics correctly in studies of solar wind turbulence

    NASA Astrophysics Data System (ADS)

    Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.

    2009-12-01

    Quantitative analyses of solar wind fluctuations are often performed in the context of intermittent turbulence and center on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large scale secular changes, so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed. Hence, natural systems such as the solar wind unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite variance processes, to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that, in particular for heavy tailed processes, for realizable N one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some “real world” time series from the solar wind. With fewer data points a stationary time series becomes indistinguishable from a nonstationary process, and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
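
    A small synthetic experiment in the spirit of the abstract: estimate the second-order structure-function scaling exponent from intervals of different length N and watch the spread of the estimates shrink as N grows. Ordinary Brownian motion (true exponent 1) is used as a convenient stand-in; the generator, lags and repetition count are illustrative only.

        import numpy as np

        def s2_exponent(x, lags):
            """Scaling exponent of the second-order structure function S2(tau) ~ tau^zeta."""
            s2 = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
            slope, _ = np.polyfit(np.log(lags), np.log(s2), 1)
            return slope

        rng = np.random.default_rng(2)
        lags = np.array([1, 2, 4, 8, 16, 32])
        for N in (10**3, 10**4, 10**5):
            estimates = [s2_exponent(np.cumsum(rng.standard_normal(N)), lags) for _ in range(50)]
            # for Brownian motion the true exponent is 1; the spread narrows with increasing N
            print(f"N = {N:>6}: zeta2 = {np.mean(estimates):.3f} +/- {np.std(estimates):.3f}")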

  15. Effects of local and large-scale climate patterns on estuarine resident fishes: The example of Pomatoschistus microps and Pomatoschistus minutus

    NASA Astrophysics Data System (ADS)

    Nyitrai, Daniel; Martinho, Filipe; Dolbeth, Marina; Rito, João; Pardal, Miguel A.

    2013-12-01

    Large-scale and local climate patterns are known to influence several aspects of the life cycle of marine fish. In this paper, we used a 9-year database (2003-2011) to analyse the populations of two estuarine resident fishes, Pomatoschistus microps and Pomatoschistus minutus, in order to determine their relationships with varying environmental stressors operating over local and large scales. This study was performed in the Mondego estuary, Portugal. Firstly, the variations in abundance, growth, population structure and secondary production were evaluated. These species appeared in high densities at the beginning of the study period, with subsequent occasional high annual density peaks, while their secondary production was lower in dry years. The relationships between yearly fish abundance and the environmental variables were evaluated separately for both species using Spearman correlation analysis, considering the yearly abundance peaks for the whole population, juveniles and adults. Among the local climate patterns, precipitation, river runoff, salinity and temperature were used in the analyses, and the North Atlantic Oscillation (NAO) index and sea surface temperature (SST) were tested as large-scale factors. For P. microps, precipitation and NAO were the significant factors explaining abundance of the whole population, the adults and the juveniles as well. Regarding P. minutus, river runoff was the significant predictor for the whole population, juveniles and adults. The results for both species suggest a differential influence of climate patterns on the various life cycle stages, confirming also the importance of estuarine resident fishes as indicators of changes in local and large-scale climate patterns, related to global climate change.

  16. Three-Year Evaluation of a Large Scale Early Grade French Immersion Program: The Ottawa Study

    ERIC Educational Resources Information Center

    Barik, Henri; Swain, Marrill

    1975-01-01

    The school performance of pupils in grades K-2 of the French immersion program in operation in Ottawa public schools is evaluated in comparison with that of pupils in the regular English program. (Author/RM)

  17. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu.

  18. Bursting process of large- and small-scale structures in turbulent boundary layer perturbed by a cylinder roughness element

    NASA Astrophysics Data System (ADS)

    Tang, Zhanqi; Jiang, Nan; Zheng, Xiaobo; Wu, Yanhua

    2016-05-01

    Hot-wire measurements on a turbulent boundary layer flow perturbed by a wall-mounted cylinder roughness element (CRE) are carried out in this study. The cylindrical element protrudes into the logarithmic layer, similar to those employed in turbulent boundary layers by Ryan et al. (AIAA J 49:2210-2220, 2011. doi: 10.2514/1.j051012) and Zheng and Longmire (J Fluid Mech 748:368-398, 2014. doi: 10.1017/jfm.2014.185) and in turbulent channel flow by Pathikonda and Christensen (AIAA J 53:1-10, 2014. doi: 10.2514/1.j053407). Similar effects on both the mean velocity and the Reynolds stress are observed downstream of the CRE perturbation. The series of hot-wire data are decomposed into large- and small-scale fluctuations, and the characteristics of the large- and small-scale bursting processes are examined by comparing the bursting duration, period and frequency between the CRE-perturbed case and the unperturbed case. It is found that the CRE perturbation has a significant impact on both the large- and small-scale structures, though in different ways. Moreover, the large-scale bursting process imposes a modulation on the bursting events of the small-scale fluctuations, and the overall trend of this modulation is not essentially sensitive to the CRE perturbation, even though the modulation extent is modified. Conditionally averaged fluctuations are also plotted, which further confirms the robustness of the bursting modulation in the present experiments.

  19. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and with multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (approximately 28,100 km2, with 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required for large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
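
    A hypothetical, scikit-learn-based stand-in for the kind of classifier comparison described above (SVM, decision tree, random forest and a small neural network on multi-temporal pixel features); it does not use the Google Earth Engine API or the JECAM data, and the feature and label arrays are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # synthetic stand-in: 5 000 pixels, 6 acquisition dates x 4 spectral bands, 5 crop classes
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5_000, 24))
        y = rng.integers(0, 5, size=5_000)

        classifiers = {
            "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
            "decision tree": DecisionTreeClassifier(random_state=0),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "neural network": make_pipeline(StandardScaler(),
                                            MLPClassifier(hidden_layer_sizes=(64,),
                                                          max_iter=500, random_state=0)),
        }

        for name, clf in classifiers.items():
            acc = cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()
            print(f"{name:>15}: overall accuracy = {acc:.3f}")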

  20. Scalable Performance Measurement and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
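
    A toy sketch of the wavelet-compression idea described above, using PyWavelets to keep only the largest coefficients of a per-timestep load trace; the threshold, wavelet and data are placeholders rather than Libra's actual scheme.

        import numpy as np
        import pywt

        def compress_trace(trace, wavelet="db4", keep=0.05):
            """Keep only the largest `keep` fraction of wavelet coefficients of a 1-D trace."""
            coeffs = pywt.wavedec(trace, wavelet)
            flat, slices = pywt.coeffs_to_array(coeffs)
            k = max(1, int(keep * flat.size))
            threshold = np.sort(np.abs(flat))[-k]
            flat[np.abs(flat) < threshold] = 0.0          # drop the small coefficients
            compressed = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
            return pywt.waverec(compressed, wavelet)

        # toy usage: a smooth load curve plus noise, reconstructed from 5% of the coefficients
        t = np.linspace(0, 1, 1024)
        load = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(t.size)
        recon = compress_trace(load)
        print("relative error:", np.linalg.norm(recon[: load.size] - load) / np.linalg.norm(load))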

  1. Wafer-size free-standing single-crystalline graphene device arrays

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jing, Gaoshan; Zhang, Bo; Sando, Shota; Cui, Tianhong

    2014-08-01

    We report an approach for wafer-scale growth of addressable single-crystalline graphene (SCG) arrays that uses pre-patterned seeds to control nucleation. The growth mechanism and superb properties of SCG were studied. Large arrays of free-standing SCG devices were realized. Characterization of SCG as nano switches shows excellent performance, with a lifetime (>22 000 switching cycles) two orders of magnitude longer than that of other graphene nano switches reported so far. This work not only shows the possibility of producing wafer-scale high quality SCG device arrays but also explores the superb performance of SCG in nano devices.

  2. Ultrasonically Absorptive Coatings for Hypersonic Laminar Flow Control

    DTIC Science & Technology

    2007-12-01

    integrate JAC and TPS functions. To aid in the design of UAC with regular microstructure to be tested in the CUBRC LENS I tunnel, parametric studies of th...solid foundation for large-scale demonstration of the UAC-LFC performance in the CUBRC LENS I tunnel as well as fabrication of ceramic UAC samples...with regular microstructure to be tested in the CUBRC LENS I tunnel, extensive parametric studies of the UAC laminar flow control performance were conducted

  3. From lab to full-scale ultrafiltration in microalgae harvesting

    NASA Astrophysics Data System (ADS)

    Wenten, I. G.; Steven, S.; Dwiputra, A.; Khoiruddin; Hakim, A. N.

    2017-07-01

    Ponding systems are generally used for microalgae cultivation. However, selection of an appropriate technology for the harvesting process is challenging due to the low cell density of microalgae cultivated in ponding systems and the large volume of water to be handled. One of the promising technologies for microalgae harvesting is ultrafiltration (UF). In this study, the performance of UF during harvesting of microalgae is investigated in a lab-scale and a full-scale test. The performances at both scales are compared and analyzed to provide an understanding of several aspects that affect the yield under lab and actual conditions. Furthermore, a unique self-standing non-modular UF is introduced in the full-scale test. The non-modular UF exhibits several advantages, such as simple piping and connection, a single pump for filtration and backwashing, and a smaller footprint. With those advantages, the non-modular UF could be a promising technology for microalgae harvesting at industrial scale.

  4. Extending large-scale forest inventories to assess urban forests.

    PubMed

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on data from the Italian NFI.

  5. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    PubMed

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
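
    A minimal sketch of the second procedure described above (harvesting pre-existing Excel files into one standardized table for automated concentration-response analysis) might look as follows; the directory layout, sheet and column names are assumptions, not the ACuteTox conventions.

        # Hedged sketch: collect researcher Excel files into one standardized CSV.
        import glob
        import pandas as pd

        frames = []
        for path in glob.glob('raw_data/*.xlsx'):
            df = pd.read_excel(path, sheet_name=0)
            # Assumed original headers; real files would need a per-lab mapping.
            df = df.rename(columns={'Conc': 'concentration', 'Resp': 'response'})
            df['source_file'] = path
            frames.append(df[['source_file', 'concentration', 'response']])

        standardized = pd.concat(frames, ignore_index=True)
        standardized.to_csv('standardized_concentration_response.csv', index=False)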

  6. On the role of minicomputers in structural design

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1977-01-01

    Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.

  7. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277

  8. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    PubMed Central

    2013-01-01

    Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (<0.1 log units RMSE difference and <0.1 difference in MCC), while errors for individual proteins were in some cases found to be larger than those resulting from descriptor set differences (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to small but consistent improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that choosing an appropriate descriptor set is of fundamental importance for bioactivity modeling, both from the ligand as well as the protein side. PMID:24059743

  9. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
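
    Loop perforation, one of the heuristics named above, can be illustrated with a toy PageRank: vertices whose rank has nearly converged are frozen instead of being updated every iteration. This is a schematic sketch, not the authors' implementation, and the tolerances are arbitrary.

        # Hedged sketch of loop perforation applied to PageRank on a small dense adjacency matrix.
        import numpy as np

        def perforated_pagerank(adj, d=0.85, tol=1e-10, skip_tol=1e-6, iters=100):
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            rank = np.full(n, 1.0 / n)
            for _ in range(iters):
                contrib = np.where(out_deg > 0, rank / np.maximum(out_deg, 1), 0.0)
                new = (1 - d) / n + d * (adj.T @ contrib)
                delta = np.abs(new - rank)
                # Perforation: only vertices that still change by more than skip_tol are updated.
                # (In a real implementation the frozen vertices would be skipped entirely.)
                rank = np.where(delta > skip_tol, new, rank)
                if delta.max() < tol:
                    break
            return rank / rank.sum()

        adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
        print(perforated_pagerank(adj))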

  10. Scale Up of Malonic Acid Fermentation Process: Cooperative Research and Development Final Report, CRADA Number CRD-16-612

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schell, Daniel J

    The goal of this work is to use the large fermentation vessels in the National Renewable Energy Laboratory's (NREL) Integrated Biorefinery Research Facility (IBRF) to scale up Lygos' biological-based process for producing malonic acid and to generate performance data. Initially, work at the 1 L scale validated successful transfer of Lygos' fermentation protocols to NREL using a glucose substrate. Outside of the scope of the CRADA with NREL, Lygos tested their process on lignocellulosic sugars produced by NREL at Lawrence Berkeley National Laboratory's (LBNL) Advanced Biofuels Process Development Unit (ABPDU). NREL produced these cellulosic sugar solutions from corn stover using a separate cellulose/hemicellulose process configuration. Finally, NREL performed fermentations using glucose in large fermentors (1,500- and 9,000-L vessels) to produce intermediate product and to demonstrate successful performance of Lygos' technology at larger scales.

  11. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  12. Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations

    NASA Technical Reports Server (NTRS)

    Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
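
    A minimal sketch of the percolation idea on a smoothed 2D density field: raise the threshold until the over-dense cells no longer form a cluster spanning the box; cells below that threshold then delimit the voids. The spanning test and the synthetic field below are simplifications, not the authors' procedure.

        # Hedged sketch: percolation threshold of a smoothed 2D density field.
        import numpy as np
        from scipy import ndimage

        def percolation_threshold(density, levels=np.linspace(-2.0, 2.0, 81)):
            last_spanning = None
            for level in levels:
                labels, n = ndimage.label(density >= level)
                if n == 0:
                    break
                sizes = np.bincount(labels.ravel())[1:]
                largest = sizes.argmax() + 1
                rows = np.where(labels == largest)[0]
                # Crude spanning test: the largest cluster touches the top and bottom edges.
                if rows.min() == 0 and rows.max() == density.shape[0] - 1:
                    last_spanning = level
                else:
                    break
            return last_spanning

        rng = np.random.default_rng(1)
        field = ndimage.gaussian_filter(rng.standard_normal((128, 128)), sigma=4)
        field /= field.std()
        print('percolation threshold (field units):', percolation_threshold(field))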

  13. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
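
    For orientation, the simpler baseline mentioned above (per-batch median scaling of log abundances using quality control samples) can be sketched as follows. This is not mixnorm itself and, unlike mixnorm, it ignores batch-specific truncation of low-abundance compounds; the column names are assumptions.

        # Hedged sketch: per-batch median scaling using QC samples.
        import numpy as np
        import pandas as pd

        def median_scale_by_batch(df, metabolite_cols):
            out = df.copy()
            qc = df[df['sample_type'] == 'QC']
            for batch, qc_batch in qc.groupby('batch'):
                offset = np.log(qc_batch[metabolite_cols]).median()   # per-metabolite QC median (log scale)
                idx = out['batch'] == batch
                out.loc[idx, metabolite_cols] = np.exp(np.log(out.loc[idx, metabolite_cols]) - offset)
            return out

        demo = pd.DataFrame({
            'batch': [1, 1, 2, 2],
            'sample_type': ['QC', 'study', 'QC', 'study'],
            'met_a': [100.0, 90.0, 150.0, 120.0]})
        print(median_scale_by_batch(demo, ['met_a']))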

  14. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults.

    PubMed

    Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey

    2015-10-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.

  15. A Cross-Cultural Comparison of Student Learning Patterns in Higher Education

    ERIC Educational Resources Information Center

    Marambe, Kosala N.; Vermunt, Jan D.; Boshuizen, Henny P. A.

    2012-01-01

    The aim of this study was to compare student learning patterns in higher education across different cultures. A meta-analysis was performed on three large-scale studies that had used the same research instrument: the Inventory of learning Styles (ILS). The studies were conducted in the two Asian countries Sri Lanka and Indonesia and the European…

  16. Large-Scale, Three–Dimensional, Free–Standing, and Mesoporous Metal Oxide Networks for High–Performance Photocatalysis

    PubMed Central

    Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng

    2013-01-01

    Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component, large-scale, three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess a large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large exposed surface area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and noble metal nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks exhibit high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595

  17. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099

    2015-03-15

    A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard one.

  18. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, Relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC Flight-Test Radio. The report contains a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data with observations.

  19. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    PubMed

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
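
    The rescoring step that the solvers above accelerate can be sketched with any linear SVM: learn a boundary between target and decoy peptide-spectrum matches and reuse the decision value as the recalibrated score. The sketch below uses scikit-learn on synthetic features and is not the Percolator code path.

        # Hedged sketch: target/decoy rescoring with a linear SVM on synthetic PSM features.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        targets = rng.normal(loc=1.0, size=(1000, 5))   # synthetic PSM feature vectors
        decoys = rng.normal(loc=0.0, size=(1000, 5))
        X = np.vstack([targets, decoys])
        y = np.concatenate([np.ones(1000), np.zeros(1000)])

        svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        rescored = svm.decision_function(X)             # recalibrated PSM scores
        print('mean target score > mean decoy score:',
              rescored[:1000].mean() > rescored[1000:].mean())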

  20. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  1. Breaking down the Bilingual Cost in Speech Production

    ERIC Educational Resources Information Center

    Sadat, Jasmin; Martin, Clara D.; Magnuson, James S.; Alario, François-Xavier; Costa, Albert

    2016-01-01

    Bilinguals have been shown to perform worse than monolinguals in a variety of verbal tasks. This study investigated this bilingual verbal cost in a large-scale picture-naming study conducted in Spanish. We explored how individual characteristics of the participants and the linguistic properties of the words being spoken influence this performance…

  2. Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025

    ERIC Educational Resources Information Center

    Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.

    2012-01-01

    This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…

  3. Diagnostics of a large-scale irregularity in the electron density near the boundary of the radio transparency frequency range of the ionosphere

    NASA Astrophysics Data System (ADS)

    Afanasiev, N. T.; Markov, V. P.

    2011-08-01

    Approximate functional relationships for the calculation of a disturbed transionogram with a trace deformation caused by the influence of a large-scale irregularity in the electron density are obtained. Numerical and asymptotic modeling of disturbed transionograms at various positions of a spacecraft relative to a ground-based observation point is performed. A possibility of the determination of the intensity and dimensions of a single large-scale irregularity near the boundary of the radio transparency frequency range of the ionosphere is demonstrated.

  4. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  5. More robust regional precipitation projection from selected CMIP5 models based on multiple-dimensional metrics

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, L.; Leung, L. R.; Lin, G.; Lu, J.; Gao, Y.; Zhang, Y.

    2017-12-01

    Projecting precipitation changes is challenging because of incomplete understanding of the climate system and biases and uncertainty in climate models. In East Asia, summer precipitation is dominantly influenced by the monsoon circulation, and the global models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) give widely varying projections of precipitation change for the 21st century. It is critical for the community to know which models' projections are more reliable in response to natural and anthropogenic forcings. In this study we defined multi-dimensional metrics measuring model performance in simulating the present-day large-scale circulation, regional precipitation, and the relationship between them. The large-scale circulation features examined in this study include the lower-tropospheric southwesterly winds, the western North Pacific subtropical high, the South China Sea subtropical high, and the East Asian westerly jet in the upper troposphere. Each of these circulation features transports moisture to East Asia, enhancing the moist static energy and strengthening the Meiyu moisture front that is the primary mechanism for precipitation generation in eastern China. Based on these metrics, 30 models in the CMIP5 ensemble are classified into three groups. Models in the top performing group projected regional precipitation patterns that are more similar to each other than those in the bottom or middle performing groups and consistently projected statistically significant increasing trends in two of the large-scale circulation indices and in precipitation. In contrast, models in the bottom or middle performing groups projected small drying or no trends in precipitation. We also find that models that merely reproduce the observed precipitation climatology reasonably well do not guarantee a more reliable projection of future precipitation, because good simulation skill can be achieved through compensating errors from multiple sources. Herein the potential for more robust projections of precipitation changes at the regional scale is demonstrated through the use of discriminating metrics to subsample the multi-model ensemble. The results from this study provide insights into how to select models from the CMIP ensemble to project regional climate and hydrological cycle changes.
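
    A minimal sketch of the model-selection idea: score each model by how well its present-day fields match a reference and keep the top group. The specific skill formula and the equal weighting across variables are assumptions, not the metrics defined in the study.

        # Hedged sketch: rank climate models by a simple combined skill score.
        import numpy as np

        def skill(model_field, ref_field):
            rmse = np.sqrt(np.mean((model_field - ref_field) ** 2))
            corr = np.corrcoef(model_field.ravel(), ref_field.ravel())[0, 1]
            return corr - rmse / (np.abs(ref_field).mean() + 1e-12)   # higher is better

        def rank_models(models, reference):
            # models: {name: {variable: 2D array}}; reference: {variable: 2D array}
            scores = {name: np.mean([skill(fields[v], reference[v]) for v in reference])
                      for name, fields in models.items()}
            return sorted(scores, key=scores.get, reverse=True)

        ref = {'precip': np.arange(16.0).reshape(4, 4)}
        models = {'good': {'precip': ref['precip'].copy()},
                  'poor': {'precip': ref['precip'][::-1, ::-1].copy()}}
        print(rank_models(models, ref))   # 'good' should rank first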

  6. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    PubMed

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  7. Some aspects of wind tunnel magnetic suspension systems with special application at large physical scales

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1983-01-01

    Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied, and it is concluded that system cost and complexity need not be seriously increased.

  8. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
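
    A much-simplified sketch of the two-level idea (an in-memory ring of recent checkpoints plus periodic disk checkpoints) is given below; it is not the binomial scheme or the Nek5000 integration, and the slot counts and file naming are arbitrary.

        # Hedged sketch: a toy two-level (memory + disk) checkpointer.
        import pickle
        from collections import deque

        class TwoLevelCheckpointer:
            def __init__(self, memory_slots=4, disk_every=50, path='ckpt_{step}.pkl'):
                self.memory = deque(maxlen=memory_slots)   # (step, state), newest last
                self.disk_every = disk_every
                self.path = path

            def save(self, step, state):
                self.memory.append((step, state))
                if step % self.disk_every == 0:            # level 2: bandwidth-limited disk
                    with open(self.path.format(step=step), 'wb') as f:
                        pickle.dump(state, f)

            def nearest_restart(self, step):
                # Prefer an in-memory checkpoint at or before `step`; fall back to disk.
                for s, state in reversed(self.memory):
                    if s <= step:
                        return s, state
                disk_step = (step // self.disk_every) * self.disk_every
                with open(self.path.format(step=disk_step), 'rb') as f:
                    return disk_step, pickle.load(f)

        ck = TwoLevelCheckpointer(memory_slots=3, disk_every=5)
        for step in range(1, 13):
            ck.save(step, {'step': step, 'field': [step] * 4})
        print(ck.nearest_restart(11))   # served from memory
        print(ck.nearest_restart(6))    # falls back to the disk checkpoint at step 5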

  9. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
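
    The chunking idea exploited above can be sketched with a process pool: the ligand list is split into chunks and each chunk is docked independently. The docking call below is a placeholder function, not a real docking-code API.

        # Hedged sketch: chunked, embarrassingly parallel virtual screening.
        from concurrent.futures import ProcessPoolExecutor

        def fake_dock(path):
            return float(len(path) % 7)             # stand-in "binding score" for the sketch

        def dock_chunk(ligand_paths):
            # Each chunk runs sequentially inside one worker process.
            return {path: fake_dock(path) for path in ligand_paths}

        def screen(ligands, chunk_size=100, workers=8):
            chunks = [ligands[i:i + chunk_size] for i in range(0, len(ligands), chunk_size)]
            results = {}
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for partial in pool.map(dock_chunk, chunks):
                    results.update(partial)
            return results

        if __name__ == '__main__':
            print(len(screen([f'ligand_{i}.pdbqt' for i in range(1000)])))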

  10. Comparative performance of a genetically-encoded voltage indicator and a blue voltage sensitive dye for large scale cortical voltage imaging

    PubMed Central

    Mutoh, Hiroki; Mishina, Yukiko; Gallero-Salas, Yasir; Knöpfel, Thomas

    2015-01-01

    Traditional small molecule voltage sensitive dye indicators have been a powerful tool for monitoring large scale dynamics of neuronal activities but have several limitations including the lack of cell class specific targeting, invasiveness and difficulties in conducting longitudinal studies. Recent advances in the development of genetically-encoded voltage indicators have successfully overcome these limitations. Genetically-encoded voltage indicators (GEVIs) provide sufficient sensitivity to map cortical representations of sensory information and spontaneous network activities across cortical areas and different brain states. In this study, we directly compared the performance of a prototypic GEVI, VSFP2.3, with that of a widely used small molecule voltage sensitive dye (VSD), RH1691, in terms of their ability to resolve mesoscopic scale cortical population responses. We used three synchronized CCD cameras to simultaneously record the dual emission ratiometric fluorescence signal from VSFP2.3 and RH1691 fluorescence. The results show that VSFP2.3 offers more stable and less invasive recording conditions, while the signal-to-noise level and the response dynamics to sensory inputs are comparable to RH1691 recordings. PMID:25964738

  11. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  12. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    PubMed

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large-scale and 35 on a small-scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high work-load and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task-ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Aerodynamics of S809 Airfoil at Low and Transitional Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Carreras, Jaime J.; Laal-Dehghani, Nader; Gorumlu, Serdar; Mehdi, Faraz; Castillo, Luciano; Aksak, Burak; Sheng, Jian

    2013-11-01

    The S809 is a thick airfoil extensively used in wind turbine design applications and in wind tunnel model studies. With increased interest in reducing energy production cost and understanding turbulence and turbine interactions, scaled-down models (Re ~10^3) are often used as an alternative to full-scale field experimentation (Re >10^6). This Reynolds number discrepancy raises the issue of scaling the airfoil performance from laboratory studies to field-scale applications. To the best of our knowledge, there are no studies in the literature characterizing the lift and drag coefficients of the S809 airfoil at Re less than 3×10^5. This study fills this deficit in the current state of knowledge by performing high-resolution force measurements. The lift and drag measurements are carried out in the Texas Tech Wind Tunnel Facility using an in-house developed dual-cell force balance. The configuration eliminates the large torque and torsion often associated with conventional mounts. This unique design allows us to reach a measurement accuracy of 0.02 N (0.1%). Comparative studies are performed on a two-dimensional airfoil with a smooth surface as well as a well-engineered surface covered by a micro-pillar array to simulate the surface conditions of a real-life airfoil.
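
    For reference, the non-dimensionalisation behind these measurements is the standard one: force-balance readings are converted to lift and drag coefficients at the chord Reynolds number. The numbers in the sketch are illustrative only, not values from the study.

        # Hedged sketch: lift/drag coefficients and chord Reynolds number from force readings.
        rho = 1.2        # air density, kg/m^3
        mu = 1.8e-5      # dynamic viscosity, kg/(m*s)
        U = 10.0         # freestream speed, m/s
        chord = 0.15     # model chord, m
        span = 0.30      # model span, m
        lift, drag = 1.8, 0.12                 # measured forces, N (illustrative)

        q = 0.5 * rho * U ** 2                 # dynamic pressure
        area = chord * span
        cl = lift / (q * area)
        cd = drag / (q * area)
        reynolds = rho * U * chord / mu
        print(f'Cl={cl:.3f}  Cd={cd:.4f}  Re={reynolds:.2e}')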

  14. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015, Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL...The Mahali project," Communications Magazine, vol. 52, pp. 111–133, Aug 2014.

  15. The Application of Large-Scale Hypermedia Information Systems to Training.

    ERIC Educational Resources Information Center

    Crowder, Richard; And Others

    1995-01-01

    Discusses the use of hypermedia in electronic information systems that support maintenance operations in large-scale industrial plants. Findings show that after establishing an information system, the same resource base can be used to train personnel how to use the computer system and how to perform operational and maintenance tasks. (Author/JMV)

  16. Dynamics of the McDonnell Douglas Large Scale Dynamic Rig and Dynamic Calibration of the Rotor Balance

    DOT National Transportation Integrated Search

    1994-10-01

    A shake test was performed on the Large Scale Dynamic Rig in the 40- by 80-Foot Wind Tunnel in support of the McDonnell Douglas Advanced Rotor Technology (MDART) Test Program. The shake test identifies the hub modes and the dynamic calibration matrix...

  17. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulations. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABMs for large-scale simulation with limited computational resources.
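
    The region-partitioning idea can be sketched by simulating each subregion's agents in its own worker process. This toy illustration is not the published SRA (which also handles interactions across region boundaries), and the transmission probability is arbitrary.

        # Hedged sketch: per-subregion ABM steps dispatched to a process pool.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def simulate_region(args):
            region, agents, years = args
            rng = random.Random(region)
            for _ in range(years):
                for a in agents:
                    if not a['infected'] and rng.random() < 0.01:   # toy transmission step
                        a['infected'] = True
            return region, sum(a['infected'] for a in agents)

        def run(population_by_region, years=10, workers=4):
            tasks = [(r, agents, years) for r, agents in population_by_region.items()]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return dict(pool.map(simulate_region, tasks))

        if __name__ == '__main__':
            population = {'region_a': [{'infected': i < 5} for i in range(1000)],
                          'region_b': [{'infected': i < 2} for i in range(800)]}
            print(run(population, years=10, workers=2))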

  19. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  20. Large Field of View PIV Measurements of Air Entrainment by SLS SMAT Water Sound Suppression System

    NASA Astrophysics Data System (ADS)

    Stegmeir, Matthew; Pothos, Stamatios; Bissell, Dan

    2015-11-01

    Water-based sound suppression systems have been used to reduce the acoustic impact of space vehicle launches. Water flows at a high rate during launch in order to suppress Engine Generated Acoustics and other potentially damaging sources of noise. For the Space Shuttle, peak flow rates exceeded 900,000 gallons per minute. Such large water flow rates have the potential to induce substantial entrainment of the surrounding air, affecting the launch conditions and generating airflow around the launch vehicle. Validation testing is necessary to quantify this impact for future space launch systems. In this study, PIV measurements were performed to map the flow field above the SMAT sub-scale launch vehicle scaled launch stand. Air entrainment effects generated by a water-based sound suppression system were studied. Mean and fluctuating fluid velocities were mapped up to 1 m above the test stand deck and compared to simulation results. Measurements were performed with NASA MSFC.

  1. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: Overview and Air-side System Description

    NASA Technical Reports Server (NTRS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina

    2016-01-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.

  2. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
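
    The retrieval pipeline named above (feature representation, indexing, searching) can be sketched generically. The random vectors below stand in for descriptors produced by any feature extractor, and the exact nearest-neighbor index is a simplification of the approximate indexing typically used at this scale.

        # Hedged sketch: index a feature collection and retrieve the nearest images for a query.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        database_features = rng.standard_normal((100_000, 128)).astype(np.float32)  # indexed collection
        query_features = rng.standard_normal((5, 128)).astype(np.float32)           # query images

        index = NearestNeighbors(n_neighbors=10).fit(database_features)
        distances, neighbor_ids = index.kneighbors(query_features)
        print(neighbor_ids.shape)   # 5 queries x 10 retrieved image ids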

  3. Comments on "The Sensitivity Study of Radiative-Convective Equilibrium in the Tropics with a Convective Resolving Model"

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Shie, C.-L.; Simpson, J.

    2000-01-01

    In general, there are two broad scientific objectives when using cloud resolving models (CRMs, or cloud ensemble models, CEMs) to study tropical convection. The first is to use them as physics resolving models to understand the dynamic and microphysical processes associated with the tropical water and energy cycles and their role in the climate system. The second is to use CRMs to improve the representation of moist processes and their interaction with radiation in large-scale models. In order to improve the credibility of CRMs and achieve the above goals, CRMs using identical initial conditions and large-scale influences need to produce very similar results. Two CRMs produced different statistical equilibrium (SE) states even though both used the same initial thermodynamic and wind conditions. Sensitivity tests to identify the major physical processes that determine the SE states for the different CRM simulations were performed. Their results indicated that the atmospheric horizontal wind is treated quite differently in these two CRMs. The model that had stronger surface winds, and consequently larger latent and sensible heat fluxes from the ocean, produced a warmer and more humid modeled thermodynamic SE state. In addition, the domain-mean thermodynamic state is more unstable for those experiments that produced a warmer and more humid SE state. Their simulated wet (warm and humid) SE states are thermally more stable in the lower troposphere (from the surface to 4-5 km in altitude). Large-scale horizontal advective effects on temperature and water vapor mixing ratio are needed when using CRMs to perform long-term integrations to study convective feedback under specified large-scale environments. In addition, it is suggested that the dry and cold SE state simulated was caused by enhanced precipitation but not enough surface evaporation. We find some problems with the interpretation of these three phenomena.

  4. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  5. Characterizing Long-Term Groundwater Conditions and Lithology for the Design of Large-Scale Borehole Heat Exchangers

    NASA Astrophysics Data System (ADS)

    Smith, David Charles

    Construction of large-scale ground coupled heat pump (GCHP) systems that operate with hundreds or even thousands of boreholes for the borehole heat exchangers (BHE) has increased in recent years, with many coming on line in the past 10 years. Many large institutions are constructing these systems because of their ability to store energy in the subsurface for indoor cooling during the warm summer months and extract that energy for heating during the cool winter months. Despite the increase in GCHP systems constructed, there have been few long-term studies on how these large systems interact with the subsurface. The thermal response test (TRT) is the industry standard for determining the thermal properties of the rock and soil. The TRT is limited in that it can only be used to determine the effective thermal conductivity over the whole length of a single borehole at the time it is administered. The TRT cannot account for long-term changes in aquifer saturation or groundwater flow, or characterize different rock and soil units by their effectiveness for heat storage. This study established new methods, and the need, for characterizing the subsurface for the purpose of design and long-term monitoring of GCHP systems. These new methods show that characterizing long-term changes in aquifer saturation and groundwater flow, and characterizing different rock and soil units, are an important part of the design and planning process for these systems. A greater understanding of how large-scale GCHP systems interact with the subsurface will result in designs that perform more efficiently over a longer period of time and will reduce expensive modifications due to unforeseen changes in system performance.

  6. Cl-Assisted Large Scale Synthesis of Cm-Scale Buckypapers of Fe₃C-Filled Carbon Nanotubes with Pseudo-Capacitor Properties: The Key Role of SBA-16 Catalyst Support as Synthesis Promoter.

    PubMed

    Boi, Filippo S; He, Yi; Wen, Jiqiu; Wang, Shanling; Yan, Kai; Zhang, Jingdong; Medranda, Daniel; Borowiec, Joanna; Corrias, Anna

    2017-10-23

    We present a novel chemical vapour deposition (CVD) approach in which the large-scale fabrication of ferromagnetically filled cm-scale buckypapers is achieved through the deposition of a mesoporous supported catalyst (SBA-16) on a silicon substrate. We demonstrate that SBA-16 plays the crucial role of promoting the growth of carbon nanotubes (CNTs) on a horizontal plane with random orientation rather than in a vertical direction, thereby allowing facile fabrication of cm-scale CNT buckypapers free from the onion-crust by-product observed on the buckypaper surface in previous reports. The morphology and composition of the obtained CNT buckypapers are analyzed in detail by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), high-resolution TEM (HRTEM), and thermogravimetric analysis (TGA), while structural analysis is performed by Rietveld refinement of XRD data. The room-temperature magnetic properties of the produced buckypapers are also investigated and reveal a high coercivity of 650 Oe. Additionally, the electrochemical performance of these buckypapers is demonstrated and reveals a behavior compatible with that of a pseudo-capacitor (resistive capacitor), with better performance than previously studied layered buckypapers of Fe-filled CNTs obtained by pyrolysis of dichlorobenzene-ferrocene mixtures. These measurements indicate that these materials show promise for applications in energy storage systems as flexible electrodes.

  7. Evolutionary conservatism and convergence both lead to striking similarity in ecology, morphology and performance across continents in frogs.

    PubMed

    Moen, Daniel S; Irschick, Duncan J; Wiens, John J

    2013-12-22

    Many clades contain ecologically and phenotypically similar species across continents, yet the processes generating this similarity are largely unstudied, leaving fundamental questions unanswered. Is similarity in morphology and performance across assemblages caused by evolutionary convergence or by biogeographic dispersal of evolutionarily conserved ecotypes? Does convergence to new ecological conditions erase evidence of past adaptation? Here, we analyse ecology, morphology and performance in frog assemblages from three continents (Asia, Australia and South America), assessing the importance of dispersal and convergent evolution in explaining similarity across regions. We find three striking results. First, species using the same microhabitat type are highly similar in morphology and performance across both clades and continents. Second, some species on different continents owe their similarity to dispersal and evolutionary conservatism (rather than evolutionary convergence), even over vast temporal and spatial scales. Third, in one case, an ecologically specialized ancestor radiated into diverse ecotypes that have converged with those on other continents, largely erasing traces of past adaptation to their ancestral ecology. Overall, our study highlights the roles of both evolutionary conservatism and convergence in explaining similarity in species traits over large spatial and temporal scales and demonstrates a statistical framework for addressing these questions in other systems.

  8. Evolutionary conservatism and convergence both lead to striking similarity in ecology, morphology and performance across continents in frogs

    PubMed Central

    Moen, Daniel S.; Irschick, Duncan J.; Wiens, John J.

    2013-01-01

    Many clades contain ecologically and phenotypically similar species across continents, yet the processes generating this similarity are largely unstudied, leaving fundamental questions unanswered. Is similarity in morphology and performance across assemblages caused by evolutionary convergence or by biogeographic dispersal of evolutionarily conserved ecotypes? Does convergence to new ecological conditions erase evidence of past adaptation? Here, we analyse ecology, morphology and performance in frog assemblages from three continents (Asia, Australia and South America), assessing the importance of dispersal and convergent evolution in explaining similarity across regions. We find three striking results. First, species using the same microhabitat type are highly similar in morphology and performance across both clades and continents. Second, some species on different continents owe their similarity to dispersal and evolutionary conservatism (rather than evolutionary convergence), even over vast temporal and spatial scales. Third, in one case, an ecologically specialized ancestor radiated into diverse ecotypes that have converged with those on other continents, largely erasing traces of past adaptation to their ancestral ecology. Overall, our study highlights the roles of both evolutionary conservatism and convergence in explaining similarity in species traits over large spatial and temporal scales and demonstrates a statistical framework for addressing these questions in other systems. PMID:24174109

  9. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence.

    PubMed

    Chumakov, Sergei G

    2008-09-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by a mean gradient. The DNS data are used to study the properties of the subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux and the resolved and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than with resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and examine the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. The effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.
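
    For reference, the subgrid-scale scalar flux studied above is conventionally defined from the filtered velocity and scalar fields, and a tensor-viscosity closure that includes the subgrid-scale stress (as mentioned in the abstract) can be written schematically as follows. This is a generic textbook form, with an overbar denoting the LES filter and C an undetermined model coefficient; the exact coefficients and normalization used in the paper may differ.

        q_i \equiv \overline{u_i \phi} - \bar{u}_i \bar{\phi}, \qquad
        \tau_{ij} \equiv \overline{u_i u_j} - \bar{u}_i \bar{u}_j, \qquad
        q_i^{\mathrm{model}} = -C \, \tau_{ij} \, \frac{\partial \bar{\phi}}{\partial x_j}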

  10. Energy extraction from a large-scale microbial fuel cell system treating municipal wastewater

    NASA Astrophysics Data System (ADS)

    Ge, Zheng; Wu, Liao; Zhang, Fei; He, Zhen

    2015-11-01

    Development of microbial fuel cell (MFC) technology must address the challenges associated with energy extraction from large-scale MFC systems consisting of multiple modules. Herein, energy extraction is investigated with a 200-L MFC system (effective volume of 100 L for this study) treating actual municipal wastewater. A commercially available energy harvesting device (BQ 25504) is used successfully to convert 0.8-2.4 V from the MFCs to 5 V for charging ultracapacitors and running a DC motor. Four different types of serial connection containing different numbers of MFC modules are examined for energy extraction and conversion efficiency. The connection containing three rows of MFCs exhibited the best performance, with the highest power output of ∼114 mW and a conversion efficiency of ∼80%. The weak performance of one-row MFCs negatively affects the overall performance of the connected MFCs in terms of both energy production and conversion. These results indicate that an MFC system with balanced performance among individual modules will be critical to energy extraction. Future work will focus on application of the extracted energy to support MFC operation.

  11. Diagnostic performance of an indirect enzyme-linked immunosorbent assay (ELISA) to detect bovine leukemia virus antibodies in bulk-tank milk samples.

    PubMed

    Nekouei, Omid; Durocher, Jean; Keefe, Greg

    2016-07-01

    This study assessed the diagnostic performance of a commercial ELISA for detecting bovine leukemia virus antibodies in bulk-tank milk samples from eastern Canada. Sensitivity and specificity of the test were estimated at 97.2% and 100%, respectively. The test was recommended as a cost-efficient tool for large-scale screening programs.
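
    To illustrate how the reported figures arise, sensitivity and specificity follow directly from a two-by-two comparison of the ELISA result against a reference classification. The sketch below uses hypothetical herd counts, not the study's data.

        # Hypothetical 2x2 counts of bulk-tank results against a reference (gold-standard) status.
        tp, fn = 35, 1    # reference-positive herds: ELISA positive / ELISA negative
        tn, fp = 64, 0    # reference-negative herds: ELISA negative / ELISA positive

        sensitivity = tp / (tp + fn)   # proportion of infected herds detected
        specificity = tn / (tn + fp)   # proportion of uninfected herds correctly cleared
        print(f"Se = {sensitivity:.1%}, Sp = {specificity:.1%}")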

  12. Diagnostic performance of an indirect enzyme-linked immunosorbent assay (ELISA) to detect bovine leukemia virus antibodies in bulk-tank milk samples

    PubMed Central

    Nekouei, Omid; Durocher, Jean; Keefe, Greg

    2016-01-01

    This study assessed the diagnostic performance of a commercial ELISA for detecting bovine leukemia virus antibodies in bulk-tank milk samples from eastern Canada. Sensitivity and specificity of the test were estimated at 97.2% and 100%, respectively. The test was recommended as a cost-efficient tool for large-scale screening programs. PMID:27429469

  13. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  14. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000x1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access (CSMA)) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.

  15. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to add considerable time to data transfers between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
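
    The ghost-zone fix described above amounts to copying the non-contiguous boundary slices of each subdomain into contiguous staging buffers before the device-host transfer, so that each exchange is a single large copy. The NumPy sketch below only illustrates the packing idea; the actual implementation in the study is CUDA/MPI code, and the array shape and halo width here are arbitrary.

        import numpy as np

        def pack_ghost_zones(u, halo=2):
            """Copy the non-contiguous x-boundary slabs of a 3-D field
            into contiguous buffers, ready for a single transfer."""
            # u has shape (nz, ny, nx); slabs normal to x are strided in memory.
            left  = np.ascontiguousarray(u[:, :, :halo])
            right = np.ascontiguousarray(u[:, :, -halo:])
            return left, right

        def unpack_ghost_zones(u, left, right, halo=2):
            """Write received slabs back into the halo region of the local field."""
            u[:, :, :halo]  = left
            u[:, :, -halo:] = right

        u = np.random.rand(64, 64, 64)            # local (nz, ny, nx) subdomain
        left, right = pack_ghost_zones(u)
        assert left.flags["C_CONTIGUOUS"] and right.flags["C_CONTIGUOUS"]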

  16. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  17. Outlook and Challenges of Perovskite Solar Cells toward Terawatt-Scale Photovoltaic Module Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Kai; Kim, Donghoe; Whitaker, James B

    Rapid development of perovskite solar cells (PSCs) during the past several years has made this photovoltaic (PV) technology a serious contender for potential large-scale deployment on the terawatt scale in the PV market. To successfully transition PSC technology from the laboratory to industry scale, substantial efforts need to focus on scalable fabrication of high-performance perovskite modules with minimum negative environmental impact. Here, we provide an overview of the current research and our perspective regarding PSC technology toward future large-scale manufacturing and deployment. Several key challenges discussed are (1) a scalable process for large-area perovskite module fabrication; (2) less hazardous chemical routes for PSC fabrication; and (3) suitable perovskite module designs for different applications.

  18. Coloration mechanisms and phylogeny of Morpho butterflies.

    PubMed

    Giraldo, M A; Yoshioka, S; Liu, C; Stavenga, D G

    2016-12-15

    Morpho butterflies are universally admired for their iridescent blue coloration, which is due to nanostructured wing scales. We performed a comparative study on the coloration of 16 Morpho species, investigating the morphological, spectral and spatial scattering properties of the differently organized wing scales. In numerous previous studies, the bright blue Morpho coloration has been fully attributed to the multi-layered ridges of the cover scales' upper laminae, but we found that the lower laminae of the cover and ground scales play an important additional role, by acting as optical thin film reflectors. We conclude that Morpho coloration is a subtle combination of overlapping pigmented and/or unpigmented scales, multilayer systems, optical thin films and sometimes undulated scale surfaces. Based on the scales' architecture and their organization, five main groups can be distinguished within the genus Morpho, largely agreeing with the accepted phylogeny. © 2016. Published by The Company of Biologists Ltd.

  19. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    PubMed

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  20. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
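
    The Cheetah algorithms themselves are not reproduced in the record, but the basic benefit of tree-structured broadcasts, namely that the number of concurrent senders doubles every round instead of the root sending to all ranks serially, can be sketched in a few lines. The schedule below is a plain binomial tree rooted at rank 0 and is illustrative only; it is not the hierarchical, topology-aware algorithm described above.

        def binomial_bcast_schedule(p):
            """(sender, receiver) pairs per communication round for a
            binomial-tree broadcast over ranks 0..p-1, rooted at rank 0."""
            rounds, step = [], 1
            while step < p:
                # every rank that already holds the data (rank < step) forwards one copy
                rounds.append([(r, r + step) for r in range(step) if r + step < p])
                step *= 2
            return rounds

        # Example: 10 ranks are covered in 4 rounds instead of 9 serial sends.
        for i, sends in enumerate(binomial_bcast_schedule(10)):
            print(f"round {i}: {sends}")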

  2. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
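
    To make the hashing idea concrete, the sketch below implements the classic random-hyperplane (sign) LSH for cosine similarity: items whose bit signatures collide in any table become candidate near neighbours. This is a minimal single-machine illustration, not the multi-probe or Hadoop-based variants evaluated in the paper, and the dimensionality, table count, and corpus are arbitrary.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        dim, n_tables, n_bits = 128, 8, 16
        planes = rng.normal(size=(n_tables, n_bits, dim))   # random hyperplanes per table

        def signature(x, t):
            """n_bits-bit sign signature of vector x under table t."""
            bits = (planes[t] @ x) > 0
            return sum(1 << i for i, b in enumerate(bits) if b)

        corpus = rng.normal(size=(1000, dim))
        tables = [defaultdict(list) for _ in range(n_tables)]
        for idx, vec in enumerate(corpus):
            for t in range(n_tables):
                tables[t][signature(vec, t)].append(idx)

        def candidates(query):
            """Union of bucket members colliding with the query in any table."""
            hits = set()
            for t in range(n_tables):
                hits.update(tables[t].get(signature(query, t), []))
            return hits

        print(len(candidates(corpus[0])))   # the query should at least retrieve itself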

  3. Feeding biomechanics and theoretical calculations of bite force in bull sharks (Carcharhinus leucas) during ontogeny.

    PubMed

    Habegger, Maria L; Motta, Philip J; Huber, Daniel R; Dean, Mason N

    2012-12-01

    Evaluations of bite force, either measured directly or calculated theoretically, have been used to investigate the maximum feeding performance of a wide variety of vertebrates. However, bite force studies of fishes have focused primarily on small species due to the intractable nature of large apex predators. More massive muscles can generate higher forces and many of these fishes attain immense sizes; it is unclear how much of their biting performance is driven purely by dramatic ontogenetic increases in body size versus size-specific selection for enhanced feeding performance. In this study, we investigated biting performance and feeding biomechanics of immature and mature individuals from an ontogenetic series of an apex predator, the bull shark, Carcharhinus leucas (73-285 cm total length). Theoretical bite force ranged from 36 to 2128 N at the most anterior bite point, and 170 to 5914 N at the most posterior bite point over the ontogenetic series. Scaling patterns differed among the two age groups investigated; immature bull shark bite force scaled with positive allometry, whereas adult bite force scaled isometrically. When the bite force of C. leucas was compared to those of 12 other cartilaginous fishes, bull sharks presented the highest mass-specific bite force, greater than that of the white shark or the great hammerhead shark. A phylogenetic independent contrast analysis of anatomical and dietary variables as determinants of bite force in these 13 species indicated that the evolution of large adult bite forces in cartilaginous fishes is linked predominantly to the evolution of large body size. Multiple regressions based on mass-specific standardized contrasts suggest that the evolution of high bite forces in Chondrichthyes is further correlated with hypertrophication of the jaw adductors, increased leverage for anterior biting, and widening of the head. Lastly, we discuss the ecological significance of positive allometry in bite force as a possible "performance gain" early in the life history of C. leucas. Copyright © 2012 Elsevier GmbH. All rights reserved.
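
    The allometric comparison reported above reduces to regressing log bite force on log body size and comparing the fitted slope with the isometric expectation (an exponent of 2/3 when a force is scaled against body mass). The snippet below shows that standard calculation on made-up numbers; the values are illustrative and are not the shark measurements.

        import numpy as np

        # Hypothetical body mass (kg) and anterior bite force (N) over an ontogenetic series.
        mass  = np.array([10, 25, 60, 120, 190, 260], dtype=float)
        force = np.array([40, 150, 480, 1300, 2300, 3600], dtype=float)

        slope, intercept = np.polyfit(np.log10(mass), np.log10(force), 1)
        print(f"scaling exponent = {slope:.2f}; isometry for force vs. mass predicts 0.67")
        # A slope above 0.67 indicates positive allometry: bite force grows faster
        # than expected from the increase in body size alone.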

  4. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observations in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  5. Study of Microburst Detection Performance during 1985 in Memphis, Tennessee.

    DTIC Science & Technology

    1987-08-05

    downburst into two categories depending on the outburst's horizontal scale: 1) macroburst - a large downburst with its outburst winds extending in...

  6. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731

  7. Real-time mapping of the corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy

    NASA Astrophysics Data System (ADS)

    Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver

    2010-02-01

    The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384x384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm x 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically sound conclusions about morphometric plexus alterations.
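
    The affine-transformation step mentioned above places each incoming video frame into a common mosaic coordinate system. The fragment below shows the core operation with OpenCV (assumed available): a 2x3 affine matrix per frame, whose rotation/scale and translation entries here are placeholder values rather than the instrument's registration output, warps the frame into the growing map.

        import numpy as np
        import cv2  # assumes OpenCV is installed

        def place_frame(mosaic, frame, M):
            """Warp one 384x384 frame into mosaic coordinates using the 2x3
            affine matrix M and paste its non-zero pixels into the mosaic."""
            h, w = mosaic.shape
            warped = cv2.warpAffine(frame, M, (w, h))
            np.copyto(mosaic, warped, where=warped > 0)
            return mosaic

        mosaic = np.zeros((1200, 1200), dtype=np.uint8)
        frame = (np.random.rand(384, 384) * 255).astype(np.uint8)
        M = np.array([[1.0, 0.0, 400.0],     # identity orientation,
                      [0.0, 1.0, 250.0]],    # shifted by (400, 250) pixels (placeholder)
                     dtype=np.float32)
        place_frame(mosaic, frame, M)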

  8. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low-speed and high-speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to evaluating the aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.

  9. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Fire extinguishing tests -80 with methyl alcohol gasoline (in MIXED)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmstedt, G.; Ryderman, A.; Carlsson, B.

    1980-01-01

    Large-scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film forming foams (AFFF), two alcohol-resistant fluoroprotein foams, and two detergent foams on various pool fires: gasoline, isopropyl alcohol, acetone, methyl ethyl ketone, methyl alcohol, and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large-scale tests to develop a reliable laboratory method was especially examined. The tests were performed with semidirect foam application in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF give the best results, and performance with small pools can hardly be correlated with results from large-scale fires.

  11. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored for urban areas over restricted extents, but not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, and vegetated areas) by exploiting existing VHR land-cover maps for training.
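
    As an indication of what a "simple yet effective" patch-wise DCNN over the five classes of interest might look like, a hedged Keras sketch is given below. The patch size, band count (four spectral bands for SPOT 6/7-like imagery), and layer widths are assumptions for illustration, not the authors' architecture.

        import tensorflow as tf

        NUM_CLASSES = 5   # buildings, roads, water, crops, vegetated areas
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(65, 65, 4)),        # 4-band VHR patch (assumed size)
            tf.keras.layers.Conv2D(32, 5, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(128, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Labels for training would be rasterized from existing land-cover maps,
        # as the paper proposes; model.fit(patches, labels, ...) would follow.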

  12. Examination of Daily Weather in the NCAR CCM

    NASA Astrophysics Data System (ADS)

    Cocke, S. D.

    2006-05-01

    The NCAR CCM is one of the most extensively studied climate models in the scientific community. However, most studies focus primarily on the long-term mean behavior, typically on monthly or longer time scales. In this study we examine the daily weather in the GCM by performing a series of daily or weekly 10-day forecasts for one year at moderate (T63) and high (T126) resolution. The model is initialized with operational "AVN" and ECMWF analyses, and model performance is compared to that of major operational centers using the conventional skill scores employed by those centers. Such a detailed look at the CCM at shorter time scales may lead to improvements in physical parameterizations, which may in turn lead to improved climate simulations. One finding from this study is that the CCM has a significant drying tendency in the lower troposphere compared to the operational analyses. Another is that the large-scale predictability of the GCM is competitive with most of the operational models, particularly in the southern hemisphere.

  13. Using Data Mining to Predict K-12 Students' Performance on Large-Scale Assessment Items Related to Energy

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Ruiz, Miguel E.

    2008-01-01

    This article reports a study on using data mining to predict K-12 students' competence levels on test items related to energy. Data sources are the 1995 Third International Mathematics and Science Study (TIMSS), 1999 TIMSS-Repeat, 2003 Trend in International Mathematics and Science Study (TIMSS), and the National Assessment of Educational…

  14. A comprehensive evaluation of two MODIS evapotranspiration products over the conterminous United States: using point and gridded FLUXNET and water balance ET

    USGS Publications Warehouse

    Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.

    2013-01-01

    Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.
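
    Point-scale comparisons of this kind typically reduce to a handful of agreement statistics between the remotely sensed estimate and the reference ET. The helper below computes bias, RMSE, and a simple skill measure (Nash-Sutcliffe efficiency) for two paired series; the arrays are placeholders rather than the study's FLUXNET data, and the particular skill score used in the paper may differ.

        import numpy as np

        def agreement_stats(est, ref):
            """Bias, RMSE and Nash-Sutcliffe efficiency of an ET estimate vs. a reference."""
            est, ref = np.asarray(est, float), np.asarray(ref, float)
            bias = np.mean(est - ref)
            rmse = np.sqrt(np.mean((est - ref) ** 2))
            nse  = 1.0 - np.sum((est - ref) ** 2) / np.sum((ref - ref.mean()) ** 2)
            return bias, rmse, nse

        # Placeholder monthly ET series (mm/month) at one flux-tower site.
        ref = np.array([20, 35, 60, 95, 120, 130, 125, 110, 80, 50, 30, 22], float)
        est = ref * 0.9 + 5.0                 # a hypothetical product with mild bias
        print("bias=%.1f mm  rmse=%.1f mm  NSE=%.2f" % agreement_stats(est, ref))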

  15. Large-Scale Phase Synchrony Reflects Clinical Status After Stroke: An EEG Study.

    PubMed

    Kawano, Teiji; Hattori, Noriaki; Uno, Yutaka; Kitajo, Keiichi; Hatakenaka, Megumi; Yagura, Hajime; Fujimoto, Hiroaki; Yoshioka, Tomomi; Nagasako, Michiko; Otomune, Hironori; Miyai, Ichiro

    2017-06-01

    Stroke-induced focal brain lesions often exert remote effects via residual neural network activity. Electroencephalographic (EEG) techniques can assess neural network modifications after brain damage. Recently, EEG phase synchrony analyses have shown associations between the level of large-scale phase synchrony of brain activity and clinical symptoms; however, few reports have assessed such associations in stroke patients. The aim of this study was to investigate the clinical relevance of hemispheric phase synchrony in stroke patients by calculating its correlation with clinical status. This cross-sectional study included 19 patients with post-acute ischemic stroke admitted for inpatient rehabilitation. Interhemispheric phase synchrony indices (IH-PSIs) were computed in two frequency bands (alpha [α] and beta [β]), and associations between the indices and scores on the Functional Independence Measure (FIM), the National Institutes of Health Stroke Scale (NIHSS), and the Fugl-Meyer Motor Assessment (FMA) were analyzed. For further assessment of the IH-PSIs, ipsilesional intrahemispheric PSIs (IntraH-PSIs) as well as IH- and IntraH-phase lag indices (PLIs) were also evaluated. IH-PSIs correlated significantly with FIM scores and NIHSS scores. In contrast, IH-PSIs did not correlate with FMA scores. IntraH-PSIs correlated with FIM scores after removal of an outlier. The results of the analysis with PLIs were consistent with those for the IH-PSIs. The PSIs correlated with performance on the activities-of-daily-living scale but not with scores on a pure motor impairment scale. These results suggest that large-scale phase synchrony represented by IH-PSIs provides a novel surrogate marker for clinical status after stroke.
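
    The abstract does not spell out the synchrony estimator, but a phase lag index of the kind mentioned is commonly computed from instantaneous phases obtained with the Hilbert transform, as sketched below on synthetic band-limited signals. The filtering, epoching, and channel-pairing choices of a real EEG pipeline are omitted here; the signals and lag are illustrative.

        import numpy as np
        from scipy.signal import hilbert

        def phase_lag_index(x, y):
            """PLI of two band-pass filtered signals: |mean sign of their phase difference|."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return np.abs(np.mean(np.sign(np.sin(dphi))))

        fs = 250.0
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(1)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)        # 10 Hz (alpha) source
        y = np.sin(2 * np.pi * 10 * t + 0.4) + 0.5 * rng.standard_normal(t.size)  # lagged copy plus noise
        print(f"PLI = {phase_lag_index(x, y):.2f}")   # approaches 1 for a consistent non-zero lag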

  16. Compromising genetic diversity in the wild: Unmonitored large-scale release of plants and animals

    Treesearch

    Linda Laikre; Michael K. Schwartz; Robin S. Waples; Nils Ryman; F. W. Allendorf; C. S. Baker; D. P. Gregovich; M. M. Hansen; J. A. Jackson; K. C. Kendall; K. McKelvey; M. C. Neel; I. Olivieri; R. Short Bull; J. B. Stetz; D. A. Tallmon; C. D. Vojta; D. M. Waller

    2010-01-01

    Large-scale exploitation of wild animals and plants through fishing, hunting and logging often depends on augmentation through releases of translocated or captively raised individuals. Such releases are performed worldwide in vast numbers. Augmentation can be demographically and economically beneficial but can also cause four types of adverse genetic change to wild...

  17. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Construction of large scale switch matrix by interconnecting integrated optical switch chips with EDFAs

    NASA Astrophysics Data System (ADS)

    Liao, Mingle; Wu, Baojian; Hou, Jianhong; Qiu, Kun

    2018-03-01

    Large-scale optical switches are essential components in optical communication networks. We aim to build a large-scale optical switch matrix by interconnecting silicon-based optical switch chips in a 3-stage CLOS structure, where EDFAs are needed to compensate for the insertion loss of the chips. The optical signal-to-noise ratio (OSNR) performance of the resulting large-scale optical switch matrix is investigated for TE-mode light, and the experimental results are in agreement with the theoretical analysis. We build a 64 × 64 switch matrix from 16 × 16 optical switch chips; the OSNR and receiver sensitivity can be improved by 0.6 dB and 0.2 dB, respectively, by optimizing the gain configuration of the EDFAs.

  19. PISA Lends Legitimacy: A Study of Education Policy Changes in Germany and Sweden after 2000

    ERIC Educational Resources Information Center

    Ringarp, Johanna

    2016-01-01

    School issues have become increasingly important in public elections and political debates, leading to increased focus on the results students achieve in international large-scale assessments and in the rankings of the involved countries. One of the most important studies of scholastic performance is the Programme for International Student…

  20. Screening for Language Delay: Growth Trajectories of Language Ability in Low- and High-Performing Children

    ERIC Educational Resources Information Center

    Klem, Marianne; Hagtvet, Bente; Hulme, Charles; Gustafsson, Jan-Eric

    2016-01-01

    Purpose: This study investigated the stability and growth of preschool language skills and explores latent class analysis as an approach for identifying children at risk of language impairment. Method: The authors present data from a large-scale 2-year longitudinal study, in which 600 children were assessed with a language-screening tool…

  1. A Hybrid, Large-Scale Wireless Sensor Network for Real-Time Acquisition and Tracking

    DTIC Science & Technology

    2007-06-01

    A multicolor, Quantum Well Infrared Photodetector (QWIP), step-stare, large-format Focal Plane Array (FPA) is proposed and evaluated through performance analysis. The thesis proposes...

  2. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  3. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE PAGES

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...

    2018-05-14

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  4. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  5. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high-level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as to evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual-stream inlet intended to model potential flight hardware and a single-stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  6. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
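
    The essential trick of COLA can be stated in one schematic equation: the particle trajectory is split into its LPT part plus a residual, and the N-body integrator only advances the residual, so the large-scale dynamics stay exact however few timesteps are taken. In our notation (a paraphrase of the idea, not a quotation of the paper's equations, and omitting the modified time-stepping operators used in practice):

        \mathbf{x} = \mathbf{x}_{\mathrm{LPT}} + \delta\mathbf{x}, \qquad
        \frac{d^2}{dt^2}\,\delta\mathbf{x} = -\nabla\Phi - \frac{d^2}{dt^2}\,\mathbf{x}_{\mathrm{LPT}}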

  7. On mechanics and material length scales of failure in heterogeneous interfaces using a finite strain high performance solver

    NASA Astrophysics Data System (ADS)

    Mosby, Matthew; Matouš, Karel

    2015-12-01

    Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to apprehend fracture characteristics, such as strength, fracture toughness and shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture the particle size-effect in fracture toughness, for example. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.

  8. Effects of multiple-scale driving on turbulence statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Hyunju; Cho, Jungyeon, E-mail: hyunju527@gmail.com, E-mail: jcho@cnu.ac.kr

    2014-01-01

    Turbulence is ubiquitous in astrophysical fluids such as the interstellar medium and the intracluster medium. In turbulence studies, it is customary to assume that fluid is driven on a single scale. However, in astrophysical fluids, there can be many different driving mechanisms that act on different scales. If there are multiple energy-injection scales, the process of energy cascade and turbulence dynamo will be different compared with the case of the single energy-injection scale. In this work, we perform three-dimensional incompressible/compressible magnetohydrodynamic turbulence simulations. We drive turbulence in Fourier space in two wavenumber ranges, 2 ≤ k ≤ √12 (large scale) and 15 ≲ k ≲ 26 (small scale). We inject different amounts of energy in each range by changing the amplitude of the forcing in that range. We present the time evolution of the kinetic and magnetic energy densities and discuss the turbulence dynamo in the presence of energy injections at two scales. We show how kinetic, magnetic, and density spectra are affected by the two-scale energy injections and we discuss the observational implications. In the case ε_L < ε_S, where ε_L and ε_S are energy-injection rates at the large and small scales, respectively, our results show that even a tiny amount of large-scale energy injection can significantly change the properties of turbulence. On the other hand, when ε_L ≳ ε_S, the small-scale driving does not influence the turbulence statistics much unless ε_L ∼ ε_S.
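
    The following sketch illustrates, under simplifying assumptions (a 2D scalar field on a periodic grid rather than the authors' 3D MHD solver), how a forcing term can be confined to the two wavenumber bands quoted above with independently chosen injection amplitudes.

```python
import numpy as np

# Minimal sketch of two-band driving in Fourier space, in the spirit of the
# large-scale (2 <= k <= sqrt(12)) and small-scale (15 <~ k <~ 26) bands above.
# A 2D scalar forcing is assumed for illustration; not the authors' MHD code.

def two_band_forcing(n=128, amp_large=1.0, amp_small=0.1, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n) * n                 # integer wavenumbers
    ky = np.fft.fftfreq(n) * n
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)

    # Random phases everywhere, then keep only the two driving bands.
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    band_large = (kmag >= 2.0) & (kmag <= np.sqrt(12.0))
    band_small = (kmag >= 15.0) & (kmag <= 26.0)

    f_hat = np.zeros((n, n), dtype=complex)
    f_hat[band_large] = amp_large * phases[band_large]
    f_hat[band_small] = amp_small * phases[band_small]

    # Back to real space; the real part keeps the forcing real-valued.
    return np.fft.ifft2(f_hat).real

force = two_band_forcing()
print("forcing rms:", force.std())
```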

  9. Interlaboratory Study Characterizing a Yeast Performance Standard for Benchmarking LC-MS Platform Performance*

    PubMed Central

    Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.

    2010-01-01

    Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499

  10. Psychosocial correlates of fatigue in multiple sclerosis.

    PubMed

    Schwartz, C E; Coulthard-Morris, L; Zeng, Q

    1996-02-01

    To explore: (1) the interrelation among the neuropsychological, psychological, and psychosocial factors and fatigue as measured by the Multidimensional Assessment of Fatigue scale, and (2) the impact of fatigue on role performance. Clinical interview with neuropsychological testing and cross-sectional study by mail. Multiple sclerosis (MS) clinic registry of a large Boston teaching hospital. 139 MS patients representing a broad range of disability. The Multidimensional Assessment of Fatigue (MAF) scale, the Extended Disability Status Scale, the Sickness Impact Profile, Rao cognitive battery, the Trailmaking Test, depression, anxiety, and social activity limitations subscales from the Arthritis Impact Measurement Scales, and the Ryff Happiness Scale. Stepwise multiple regression analyses revealed that having a low sense of environmental mastery was the best psychosocial predictor of both global fatigue and fatigue-related distress, after adjusting for sociodemographic and medical factors. Further, people who reported being more depressed tended to report more severe fatigue. Neuropsychological performance was not associated with fatigue. Fatigue was found to limit social, work, and overall role performance, but not physical role performance. People who feel that they can choose or create environments suitable to their psychic or physical conditions report less global fatigue and less fatigue-related distress, and fatigue can have an important impact on role performance. The implications of these findings for designing fatigue management interventions are discussed.

  11. Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian

    2016-01-01

    The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality, ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to describe the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.

  12. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking place during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, computation time and number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully-integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the fully integrated 2D model (NSE = 0.88), while the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
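
    For reference, the Nash-Sutcliffe efficiency (NSE) quoted above compares model error against the variance of the observations; a minimal sketch with placeholder discharge values follows.

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE), the skill metric quoted above
# (hybrid model NSE = 0.87 vs. fully integrated 2D model NSE = 0.88).
# The arrays below are illustrative placeholders, not the study's data.

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([120.0, 340.0, 560.0, 410.0, 230.0])   # observed discharge
sim = np.array([130.0, 320.0, 540.0, 430.0, 250.0])   # simulated discharge
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```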

  13. Beam-Beam Study on the Upgrade of Beijing Electron Positron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S.; /Beijing, Inst. High Energy Phys.; Cai, Y.

    2006-02-10

    Studying the beam-beam interaction is an important issue in the design and performance of a high-luminosity collider such as BEPCII, the upgrade of the Beijing Electron Positron Collider. Weak-strong simulation is generally used during the design of a collider. To perform a large-scale tune scan, weak-strong simulation studies of the beam-beam interaction were carried out, with geometry effects taken into account. Strong-strong simulation studies were performed to investigate the luminosity goal and the dependence of the luminosity on the beam parameters.

  14. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings.

    PubMed

    Nelson, Kerrie P; Mitani, Aya A; Edwards, Don

    2017-09-10

    Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening test results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  16. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
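
    A minimal sketch of the two planar averages contrasted in these records, applied to a single field component stored as a NumPy array B[x, y, z]; the axis ordering and grid size are assumptions for illustration.

```python
import numpy as np

# Sketch of horizontal (x-y) versus vertical (y-z) planar averaging for one
# magnetic-field component on a uniform grid (axis ordering B[x, y, z] assumed).

rng = np.random.default_rng(1)
B = rng.normal(size=(64, 64, 64))      # stand-in for one field component

# Horizontal (x-y) averaging: a mean over axes 0 and 1 leaves a profile in z.
B_xy = B.mean(axis=(0, 1))             # shape (64,), "large-scale" B(z)

# Vertical (y-z) averaging: a mean over axes 1 and 2 leaves a profile in x.
B_yz = B.mean(axis=(1, 2))             # shape (64,), "large-scale" B(x)

print(B_xy.shape, B_yz.shape)
```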

  17. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, Laura; Jakob, Christian; Cheung, K.

    2013-06-27

    Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models relative to the more traditional single best-estimate simulation.

  18. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.

  19. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    PubMed

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has yet to be clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached, where Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies early in a pandemic disease.
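
    The two scalings can be measured directly from a sequence of events. The sketch below uses a synthetic token stream (an assumption standing in for, e.g., newly infected locations) and estimates the Zipf exponent from the rank-frequency curve and the Heaps exponent from vocabulary growth.

```python
import numpy as np

# Sketch of measuring the two scalings discussed above from a sequence of
# "events" (synthetic tokens drawn from a power-law popularity model).

rng = np.random.default_rng(2)
vocab = 5000
popularity = np.arange(1, vocab + 1, dtype=float) ** -1.2
popularity /= popularity.sum()
tokens = rng.choice(vocab, size=50_000, p=popularity)

# Zipf's law: frequency of the r-th most common item ~ r^(-alpha).
counts = np.sort(np.bincount(tokens, minlength=vocab))[::-1]
counts = counts[counts > 0]
ranks = np.arange(1, counts.size + 1)
zipf_alpha = -np.polyfit(np.log(ranks), np.log(counts), 1)[0]

# Heaps' law: number of distinct items after t events ~ t^beta.
seen, distinct = set(), []
for tok in tokens:
    seen.add(int(tok))
    distinct.append(len(seen))
t = np.arange(1, len(tokens) + 1)
heaps_beta = np.polyfit(np.log(t), np.log(distinct), 1)[0]

print(f"Zipf exponent ~ {zipf_alpha:.2f}, Heaps exponent ~ {heaps_beta:.2f}")
```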

  20. Attribution of Large-Scale Climate Patterns to Seasonal Peak-Flow and Prospects for Prediction Globally

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Ward, Philip; Block, Paul

    2018-02-01

    Flood-related fatalities and impacts on society surpass those from all other natural disasters globally. While the inclusion of large-scale climate drivers in streamflow (or high-flow) prediction has been widely studied, an explicit link to global-scale long-lead prediction is lacking; establishing such a link could improve understanding of potential flood propensity. Here we attribute seasonal peak-flow to large-scale climate patterns, including the El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO), using streamflow station observations and simulations from PCR-GLOBWB, a global-scale hydrologic model. Statistically significantly correlated climate patterns and streamflow autocorrelation are subsequently applied as predictors to build a global-scale season-ahead prediction model, with prediction performance evaluated by the mean squared error skill score (MSESS) and the categorical Gerrity skill score (GSS). Globally, fair-to-good prediction skill (20% ≤ MSESS and 0.2 ≤ GSS) is evident for a number of locations (28% of stations and 29% of land area), most notably in data-poor regions (e.g., West and Central Africa). The persistence of such relevant climate patterns can improve understanding of the propensity for floods at the seasonal scale. The prediction approach developed here lays the groundwork for further improving local-scale seasonal peak-flow prediction by identifying relevant global-scale climate patterns. This is especially attractive for regions with limited observations and/or little capacity to develop flood early warning systems.
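
    A minimal sketch of the mean squared error skill score (MSESS) used above, computed against a climatology reference; the peak-flow arrays are placeholders, not the study's data.

```python
import numpy as np

# Mean squared error skill score (MSESS) against a climatology reference,
# the metric used above (skill considered fair-to-good when MSESS >= 20%).
# Arrays are illustrative placeholders.

def msess(observed, forecast):
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_climatology = np.mean((observed.mean() - observed) ** 2)
    return 1.0 - mse_forecast / mse_climatology

peak_flow_obs = np.array([820.0, 1260.0, 990.0, 1540.0, 1110.0, 870.0])
peak_flow_fcst = np.array([900.0, 1180.0, 1050.0, 1400.0, 1150.0, 940.0])
print(f"MSESS = {100 * msess(peak_flow_obs, peak_flow_fcst):.1f}%")
```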

  1. Distributed coaxial cable crack sensors for crack mapping in RC

    NASA Astrophysics Data System (ADS)

    Greene, Gary G.; Belarbi, Abdeldjelil; Chen, Genda; McDaniel, Ryan

    2005-05-01

    A new type of distributed coaxial cable sensor for health monitoring of large-scale civil infrastructure was recently proposed and developed by the authors. This paper shows the results and performance of such sensors mounted near the surface of two flexural beams and a large-scale reinforced concrete box girder that was subjected to twenty cycles of combined shear and torsion. The main objectives of this health monitoring study were to correlate the sensor's response to strain in the member and to show that the magnitude of the signal's reflection coefficient is related to increases in applied load, repeated cycles, cracking, crack mapping, and yielding. The effect of multiple adjacent cracks and signal loss was also investigated.

  2. The cosmic ray muon tomography facility based on large scale MRPC detectors

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping

    2015-06-01

    Cosmic ray muon tomography is a novel technology for detecting high-Z material. A prototype of TUMUTY with 73.6 cm×73.6 cm large-scale position-sensitive MRPC detectors has been developed and is introduced in this paper. Three test kits were tested and the images were reconstructed using the MAP algorithm. The reconstruction results show that the prototype works well and that objects with complex structure and small size (20 mm) can be imaged, while high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.

  3. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
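
    As a baseline illustration of score-level fusion (not the paper's new normalization and fusion methods), the sketch below applies min-max normalization to hypothetical fingerprint and face match scores and combines them with an equal-weight sum rule.

```python
import numpy as np

# Generic min-max score normalization followed by simple-sum fusion, a
# standard baseline in multimodal biometrics; the paper's new methods are
# not reproduced here. Scores are hypothetical.

def min_max_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

# Hypothetical match scores for the same comparisons from two matchers.
fingerprint_scores = np.array([412.0, 78.0, 355.0, 120.0, 501.0])
face_scores = np.array([0.62, 0.15, 0.48, 0.33, 0.91])

fused = 0.5 * (min_max_normalize(fingerprint_scores)
               + min_max_normalize(face_scores))
print("fused scores:", np.round(fused, 3))
```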

  4. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.

  5. Cascading failure in the wireless sensor scale-free networks

    NASA Astrophysics Data System (ADS)

    Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li

    2015-05-01

    In practical wireless sensor networks (WSNs), cascading failure caused by a failed node has a serious impact on network performance. In this paper, we investigate cascading failure in scale-free topologies for WSNs. First, a cascading failure model for scale-free topology in WSNs is studied. By analyzing the influence of node load on cascading failure, the critical load that triggers large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teacher in Yanshan University (Grant No. 14LGB017) and Yanshan University Doctoral Foundation, China (Grant No. B867).
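
    A generic load-capacity cascade on a Barabási-Albert scale-free graph (a standard Motter-Lai-style model, assumed here for illustration and not necessarily the paper's exact model) shows how a tolerance parameter sets the threshold at which a single failure triggers a large-scale cascade; it assumes the networkx package is available.

```python
import networkx as nx

# Generic load-capacity cascade on a scale-free (Barabasi-Albert) graph.
# Node load is taken as betweenness centrality and capacity as (1 + alpha)
# times the initial load; removing the most loaded node may trigger a cascade.

def cascade_size(n=300, m=2, alpha=0.2, seed=3):
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    load = nx.betweenness_centrality(g)               # initial node loads
    capacity = {v: (1.0 + alpha) * load[v] for v in g}

    # Trigger the cascade by removing the most loaded node.
    g.remove_node(max(load, key=load.get))
    failed = 1
    while True:
        load = nx.betweenness_centrality(g)
        overloaded = [v for v in g if load[v] > capacity[v]]
        if not overloaded:
            break
        g.remove_nodes_from(overloaded)
        failed += len(overloaded)
    return failed

for tolerance in (0.05, 0.2, 0.5):
    print(f"alpha = {tolerance}: {cascade_size(alpha=tolerance)} nodes failed")
```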

  6. Attention/Vigilance in Schizophrenia: Performance Results from a Large Multi-Site Study of the Consortium on the Genetics of Schizophrenia (COGS)

    PubMed Central

    Nuechterlein, Keith H.; Green, Michael F.; Calkins, Monica E.; Greenwood, Tiffany A.; Gur, Raquel E.; Gur, Ruben C.; Lazzeroni, Laura C.; Light, Gregory A.; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Sprock, Joyce; Stone, William S.; Sugar, Catherine A.; Swerdlow, Neal R.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Braff, David L.

    2015-01-01

    Attention/vigilance impairments are present in individuals with schizophrenia across psychotic and remitted states and in their first-degree relatives. An important question is whether deficits in attention/vigilance can be consistently and reliably measured across sites varying in many participant demographic, clinical, and functional characteristics, as needed for large-scale genetic studies of endophenotypes. We examined Continuous Performance Test (CPT) data from Phase 2 of the Consortium on the Genetics of Schizophrenia (COGS-2), the largest-scale assessment of cognitive and psychophysiological endophenotypes relevant to schizophrenia. CPT data from 2251 participants from five sites were examined. A perceptual-load vigilance task (the Degraded Stimulus CPT or DS-CPT) and a memory-load vigilance task (CPT - Identical Pairs or CPT-IP) were utilized. Schizophrenia patients performed more poorly than healthy comparison subjects (HCS) across sites, despite significant site differences in participant age, sex, education, and racial distribution. Patient-HCS differences in signal/noise discrimination (d’) in the DS-CPT varied significantly across sites, but averaged a medium effect size. CPT-IP performance showed large patient-HCS differences across sites. Poor CPT performance was independent of or weakly correlated with symptom severity, but was significantly associated with lower educational achievement and functional capacity. Current smoking was associated with poorer CPT-IP d’. Patients taking both atypical and typical antipsychotic medication performed more poorly than those on no or atypical antipsychotic medications, likely reflecting their greater severity of illness. We conclude that CPT deficits in schizophrenia can be reliably detected across sites, are relatively independent of current symptom severity, and are related to functional capacity. PMID:25749017

  7. Attention/vigilance in schizophrenia: performance results from a large multi-site study of the Consortium on the Genetics of Schizophrenia (COGS).

    PubMed

    Nuechterlein, Keith H; Green, Michael F; Calkins, Monica E; Greenwood, Tiffany A; Gur, Raquel E; Gur, Ruben C; Lazzeroni, Laura C; Light, Gregory A; Radant, Allen D; Seidman, Larry J; Siever, Larry J; Silverman, Jeremy M; Sprock, Joyce; Stone, William S; Sugar, Catherine A; Swerdlow, Neal R; Tsuang, Debby W; Tsuang, Ming T; Turetsky, Bruce I; Braff, David L

    2015-04-01

    Attention/vigilance impairments are present in individuals with schizophrenia across psychotic and remitted states and in their first-degree relatives. An important question is whether deficits in attention/vigilance can be consistently and reliably measured across sites varying in many participant demographic, clinical, and functional characteristics, as needed for large-scale genetic studies of endophenotypes. We examined Continuous Performance Test (CPT) data from phase 2 of the Consortium on the Genetics of Schizophrenia (COGS-2), the largest-scale assessment of cognitive and psychophysiological endophenotypes relevant to schizophrenia. The CPT data from 2251 participants from five sites were examined. A perceptual-load vigilance task (the Degraded Stimulus CPT or DS-CPT) and a memory-load vigilance task (CPT-Identical Pairs or CPT-IP) were utilized. Schizophrenia patients performed more poorly than healthy comparison subjects (HCS) across sites, despite significant site differences in participant age, sex, education, and racial distribution. Patient-HCS differences in signal/noise discrimination (d') in the DS-CPT varied significantly across sites, but averaged a medium effect size. CPT-IP performance showed large patient-HCS differences across sites. Poor CPT performance was independent of or weakly correlated with symptom severity, but was significantly associated with lower educational achievement and functional capacity. Current smoking was associated with poorer CPT-IP d'. Patients taking both atypical and typical antipsychotic medication performed more poorly than those on no or atypical antipsychotic medications, likely reflecting their greater severity of illness. We conclude that CPT deficits in schizophrenia can be reliably detected across sites, are relatively independent of current symptom severity, and are related to functional capacity. Copyright © 2015 Elsevier B.V. All rights reserved.
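
    The signal/noise discrimination index d' reported in these CPT analyses is computed from hit and false-alarm rates; a minimal sketch follows (hypothetical counts, SciPy assumed available, with a log-linear correction to avoid infinite z-scores).

```python
from scipy.stats import norm

# Signal/noise discrimination d' from hit and false-alarm rates, with a
# log-linear correction so extreme rates do not produce infinite z-scores.
# The trial counts below are hypothetical.

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(hits=72, misses=8, false_alarms=12, correct_rejections=68):.2f}")
```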

  8. Beyond Scale-Free Small-World Networks: Cortical Columns for Quick Brains

    NASA Astrophysics Data System (ADS)

    Stoop, Ralph; Saase, Victor; Wagner, Clemens; Stoop, Britta; Stoop, Ruedi

    2013-03-01

    We study to what extent cortical columns with their particular wiring boost neural computation. In a vast survey of columnar networks performing various real-world cognitive tasks, we detect no signs of enhancement. It is on a mesoscopic—intercolumnar—scale that the existence of columns, largely irrespective of their inner organization, enhances the speed of information transfer and minimizes the total wiring length required to bind distributed columnar computations towards spatiotemporally coherent results. We suggest that brain efficiency may be related to a doubly fractal connectivity law, resulting in networks with efficiency properties beyond those of scale-free networks.

  9. Investigations of grain size dependent sediment transport phenomena on multiple scales

    NASA Astrophysics Data System (ADS)

    Thaxton, Christopher S.

    Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impact on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size dependent sediment transport on three different scales. Discrete particle computer simulations of sheet flow bedload transport on the scale of 0.1–100 millimeters were performed on a heterogeneous population of grains of various grain sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1–10 m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimum sediment capture effectiveness may be achieved based on baffle permeability, pond geometry and flow rate. Finally, on the scale of 10–1,000 m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for distributed rainfall infiltration and land cover matched observations. Although a unique set of governing equations applies to each scale, an improved physics-based understanding of small and medium scale behavior may yield more accurate parameterization of key variables used in large scale predictive models.

  10. Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs

    NASA Astrophysics Data System (ADS)

    Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.

    2015-03-01

    This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.
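
    The heuristic relationship referred to above is linear on log-log axes (Rent's rule, T = t·G^p). The sketch below fits the Rent exponent from hypothetical (gates, terminals) pairs; the data are illustrative only.

```python
import numpy as np

# Rent's-rule fit: T = t * G^p, where G is the number of logic gates in a
# partition and T the number of external connections. Data are hypothetical.

gates = np.array([16, 64, 256, 1024, 4096])
terminals = np.array([22, 58, 160, 430, 1180])

p, log_t = np.polyfit(np.log(gates), np.log(terminals), 1)
print(f"Rent exponent p ~ {p:.2f}, coefficient t ~ {np.exp(log_t):.1f}")
```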

  11. A novel iron-lead redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Ren, Y. X.

    2017-04-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies for the massive utilization of intermittent renewables, especially wind and solar energy. This work presents a novel redox flow battery that utilizes inexpensive and abundant Fe(II)/Fe(III) and Pb/Pb(II) redox couples as redox materials. Experimental results show that both the Fe(II)/Fe(III) and Pb/Pb(II) redox couples have fast electrochemical kinetics in methanesulfonic acid, and that the coulombic efficiency and energy efficiency of the battery are, respectively, as high as 96.2% and 86.2% at 40 mA cm^-2. Furthermore, the battery exhibits stable performance in terms of efficiencies and discharge capacities during the cycle test. The inexpensive redox materials, fast electrochemical kinetics and stable cycle performance make the present battery a promising candidate for large-scale energy storage applications.

  12. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
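
    For orientation, the sketch below implements plain random-hyperplane (SimHash) LSH for cosine similarity, the "vanilla" baseline mentioned above; the multi-probe querying and the distributed (Hadoop) deployment evaluated in the paper are not reproduced, and the dimensions, bit counts, and data are assumptions.

```python
import numpy as np
from collections import defaultdict

# Vanilla random-hyperplane (SimHash) LSH for cosine similarity: each bit of a
# key records on which side of a random hyperplane the vector falls, so similar
# vectors tend to share a bucket and only bucket members need exact comparison.

rng = np.random.default_rng(4)
dim, n_bits = 64, 16
hyperplanes = rng.normal(size=(n_bits, dim))

def lsh_key(vector):
    bits = (hyperplanes @ vector) > 0
    return bits.tobytes()

# Index a database of vectors into hash buckets.
database = rng.normal(size=(1000, dim))
buckets = defaultdict(list)
for idx, vec in enumerate(database):
    buckets[lsh_key(vec)].append(idx)

# Query with a slight perturbation of item 0; with high probability it hashes
# to the same bucket, so only that bucket's members are candidates.
query = database[0] + 0.01 * rng.normal(size=dim)
candidates = buckets[lsh_key(query)]
print("bucket size:", len(candidates), "| contains item 0:", 0 in candidates)
```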

  13. Parameterization Interactions in Global Aquaplanet Simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.

    2018-02-01

    Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.

  14. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
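
    A minimal sketch of the L1-regularized propensity score approach compared above, using scikit-learn on synthetic data: a penalized logistic regression of treatment on all covariates followed by inverse-probability-of-treatment weights. The hdPS covariate screen and the full study pipeline are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# L1-regularized propensity score sketch on synthetic data: fit a penalized
# logistic regression of treatment on all covariates, then form stabilized
# inverse-probability-of-treatment weights.

rng = np.random.default_rng(5)
n, p = 2000, 50
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = 0.8                               # only a few true confounders
treatment = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_coef)))

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, treatment)
propensity = model.predict_proba(X)[:, 1]

p_treated = treatment.mean()
weights = np.where(treatment == 1, p_treated / propensity,
                   (1 - p_treated) / (1 - propensity))
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)),
      "| weight range:", np.round([weights.min(), weights.max()], 2))
```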

  15. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large areas abroad or with large volumes of imagery. In this paper, considering the geometric features of optical satellite imagery and building on the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Moreover, the mosaic problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied in this paper.

  16. Interplay between Functional Connectivity and Scale-Free Dynamics in Intrinsic fMRI Networks

    PubMed Central

    Ciuciu, Philippe; Abry, Patrice; He, Biyu J.

    2014-01-01

    Studies employing functional connectivity-type analyses have established that spontaneous fluctuations in functional magnetic resonance imaging (fMRI) signals are organized within large-scale brain networks. Meanwhile, fMRI signals have been shown to exhibit 1/f-type power spectra – a hallmark of scale-free dynamics. We studied the interplay between functional connectivity and scale-free dynamics in fMRI signals, utilizing the fractal connectivity framework – a multivariate extension of the univariate fractional Gaussian noise model, which relies on a wavelet formulation for robust parameter estimation. We applied this framework to fMRI data acquired from healthy young adults at rest and performing a visual detection task. First, we found that scale-invariance existed beyond univariate dynamics, being present also in bivariate cross-temporal dynamics. Second, we observed that frequencies within the scale-free range do not contribute evenly to inter-regional connectivity, with a systematically stronger contribution of the lowest frequencies, both at rest and during task. Third, in addition to a decrease of the Hurst exponent and inter-regional correlations, task performance modified cross-temporal dynamics, inducing a larger contribution of the highest frequencies within the scale-free range to global correlation. Lastly, we found that across individuals, a weaker task modulation of the frequency contribution to inter-regional connectivity was associated with better task performance manifesting as shorter and less variable reaction times. These findings bring together two related fields that have hitherto been studied separately – resting-state networks and scale-free dynamics, and show that scale-free dynamics of human brain activity manifest in cross-regional interactions as well. PMID:24675649

  17. Supercapacitors specialities - Technology review

    NASA Astrophysics Data System (ADS)

    Münchgesang, Wolfram; Meisner, Patrick; Yushin, Gleb

    2014-06-01

    Commercial electrochemical capacitors (supercapacitors) are not limited to mobile electronics anymore, but have reached the field of large-scale applications, like smart grid, wind turbines, power for large scale ground, water and aerial transportation, energy-efficient industrial equipment and others. This review gives a short overview of the current state-of-the-art of electrochemical capacitors, their commercial applications and the impact of technological development on performance.

  18. Supercapacitors specialities - Technology review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Münchgesang, Wolfram; Meisner, Patrick; Yushin, Gleb

    2014-06-16

    Commercial electrochemical capacitors (supercapacitors) are not limited to mobile electronics anymore, but have reached the field of large-scale applications, like smart grid, wind turbines, power for large scale ground, water and aerial transportation, energy-efficient industrial equipment and others. This review gives a short overview of the current state-of-the-art of electrochemical capacitors, their commercial applications and the impact of technological development on performance.

  19. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    ERIC Educational Resources Information Center

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  20. Large-scale fortification of condiments and seasonings as a public health strategy: equity considerations for implementation.

    PubMed

    Zamora, Gerardo; Flores-Urrutia, Mónica Crissel; Mayén, Ana-Lucia

    2016-09-01

    Fortification of staple foods with vitamins and minerals is an effective approach to increase micronutrient intake and improve nutritional status. The specific use of condiments and seasonings as vehicles in large-scale fortification programs is a relatively new public health strategy. This paper underscores equity considerations for the implementation of large-scale fortification of condiments and seasonings as a public health strategy by examining nonexhaustive examples of programmatic experiences and pilot projects in various settings. An overview of conceptual elements in implementation research and equity is presented, followed by an examination of equity considerations for five implementation strategies: (1) enhancing the capabilities of the public sector, (2) improving the performance of implementing agencies, (3) strengthening the capabilities and performance of frontline workers, (4) empowering communities and individuals, and (5) supporting multiple stakeholders engaged in improving health. Finally, specific considerations related to intersectoral action are discussed. Large-scale fortification of condiments and seasonings cannot be a standalone strategy and needs to be implemented with concurrent and coordinated public health strategies, which should be informed by a health equity lens. © 2016 New York Academy of Sciences.

  1. Optimization study for the experimental configuration of CMB-S4

    NASA Astrophysics Data System (ADS)

    Barron, Darcy; Chinone, Yuji; Kusaka, Akito; Borril, Julian; Errard, Josquin; Feeney, Stephen; Ferraro, Simone; Keskitalo, Reijo; Lee, Adrian T.; Roe, Natalie A.; Sherwin, Blake D.; Suzuki, Aritoki

    2018-02-01

    The CMB Stage 4 (CMB-S4) experiment is a next-generation, ground-based experiment that will measure the cosmic microwave background (CMB) polarization to unprecedented accuracy, probing the signature of inflation, the nature of cosmic neutrinos, relativistic thermal relics in the early universe, and the evolution of the universe. CMB-S4 will consist of O(500,000) photon-noise-limited detectors that cover a wide range of angular scales in order to probe the cosmological signatures from both the early and late universe. It will measure a wide range of microwave frequencies to cleanly separate the CMB signals from galactic and extra-galactic foregrounds. To advance the progress towards designing the instrument for CMB-S4, we have established a framework to optimize the instrumental configuration to maximize its scientific output. The framework combines cost and instrumental models with a cosmology forecasting tool, and evaluates the scientific sensitivity as a function of various instrumental parameters. The cost model also allows us to perform the analysis under a fixed-cost constraint, optimizing for the scientific output of the experiment given finite resources. In this paper, we report our first results from this framework, using simplified instrumental and cost models. We have primarily studied two classes of instrumental configurations: arrays of large-aperture telescopes with diameters ranging from 2–10 m, and hybrid arrays that combine small-aperture telescopes (0.5-m diameter) with large-aperture telescopes. We explore performance as a function of telescope aperture size, distribution of the detectors into different microwave frequencies, survey strategy and survey area, low-frequency noise performance, and balance between small and large aperture telescopes for hybrid configurations. Both types of configurations must cover both large (~ degree) and small (~ arcmin) angular scales, and the performance depends on assumptions for performance vs. angular scale. The configurations with large-aperture telescopes have a shallow optimum around 4–6 m in aperture diameter, assuming that large telescopes can achieve good performance for low-frequency noise. We explore some of the uncertainties of the instrumental model and cost parameters, and we find that the optimum has a weak dependence on these parameters. The hybrid configuration shows an even broader optimum, spanning a range of 4–10 m in aperture for the large telescopes. We also present two strawperson configurations as an outcome of this optimization study, and we discuss some ideas for improving our simple cost and instrumental models used here. There are several areas of this analysis that deserve further improvement. In our forecasting framework, we adopt a simple two-component foreground model with spatially varying power-law spectral indices. We estimate de-lensing performance statistically and ignore non-idealities such as anisotropic mode coverage, boundary effect, and possible foreground residual. Instrumental systematics, which is not accounted for in our analyses, may also influence the conceptual design. Further study of the instrumental and cost models will be one of the main areas of study by the entire CMB-S4 community. We hope that our framework will be useful for estimating the influence of these improvements in the future, and we will incorporate them in order to further improve the optimization.

  2. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    PubMed

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An inhouse code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a 2nd order accurate ghost point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of SGS models suggests that the classical constant coefficient Smagorinsky model gives best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using Open source field operation and manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver and the results are inline with those obtained with WenoHemo.
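
    For reference, the classical Smagorinsky model referenced above sets the subgrid eddy viscosity to ν_t = (C_s Δ)² |S|, with |S| the resolved strain-rate magnitude. The sketch below evaluates this on a 2D velocity field with central differences; the grid, flow field, and C_s value are illustrative assumptions, not the WenoHemo implementation.

```python
import numpy as np

# Classical Smagorinsky subgrid-scale eddy viscosity, nu_t = (Cs * Delta)^2 * |S|,
# evaluated for a 2D velocity field on a uniform periodic grid with central
# differences. Grid, velocity field and Cs value are illustrative only.

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    dudx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dvdx = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)

    # Strain-rate magnitude |S| = sqrt(2 S_ij S_ij) for the 2D tensor.
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

n, dx = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # a simple cellular flow
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
print("max eddy viscosity:", smagorinsky_nu_t(u, v, dx).max())
```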

  3. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software-defined networking (SDN) has become a focus of the information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport networks are also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not been evaluated until now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a proof of concept for future network deployment.

  4. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than on trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop, so that the full analysis is performed only periodically. Problem-dependent software, which embodies the definitions of design variables, objective function, and design constraints, can be separated from the generic code using a systems programming technique. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  5. Large Scale Application of Vibration Sensors for Fan Monitoring at Commercial Layer Hen Houses

    PubMed Central

    Chen, Yan; Ni, Ji-Qin; Diehl, Claude A.; Heber, Albert J.; Bogan, Bill W.; Chai, Li-Long

    2010-01-01

    Continuously monitoring the operation of each individual fan can significantly improve the measurement quality of aerial pollutant emissions from animal buildings that have a large number of fans. Monitoring fan operation by detecting fan vibration is a relatively new technique. A low-cost electronic vibration sensor was developed and commercialized; however, its large-scale application had not yet been evaluated. This paper presents long-term performance results of this vibration sensor at two large commercial layer houses. Vibration sensors were installed on 164 fans of 130 cm diameter to continuously monitor the fan on/off status for two years. The performance of the vibration sensors was compared with fan rotational speed (FRS) sensors. The vibration sensors exhibited quick response and high sensitivity to fan operations and therefore satisfied the general requirements of air quality research. The study proved that detecting fan vibration was an effective method to monitor the on/off status of a large number of single-speed fans. The vibration sensor itself was $2 more expensive than a magnetic proximity FRS sensor, but the overall cost including installation and data acquisition hardware was $77 less than that of the FRS sensor. A total of nine vibration sensors failed during the study, and the failure rate was related to the production batches. A few sensors also exhibited unsteady sensitivity. As a new product, the quality of the sensor should be improved to make it more reliable and acceptable. PMID:22163544

  6. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. The monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  7. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10⁶ based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  8. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10⁶ based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  9. Does Instructional Format Really Matter? Cognitive Load Theory, Multimedia and Teaching English Literature

    ERIC Educational Resources Information Center

    Martin, Stewart

    2012-01-01

    This article reports a quasi-experimental study on the effects of multimedia teaching and learning in English Literature--a subject which places high cognitive load on students. A large-scale study was conducted in 4 high-achieving secondary schools to examine the differences made to students' learning and performance by the use of multimedia and…

  10. A Comparative Study of Geometry in Elementary School Mathematics Textbooks from Five Countries

    ERIC Educational Resources Information Center

    Wang, Tzu-Ling; Yang, Der-Ching

    2016-01-01

    The purposes of this study were to compare the differences in the use of geometry in elementary school mathematics textbooks among Finland, Mainland China, Singapore, Taiwan, and the USA and to investigate the relationships between the design of the textbooks and students' performance on large-scale tests such as TIMSS-4 geometry, TIMSS-8…

  11. The Impact of Origin and Host Country Schooling on the Economic Performance of Immigrants

    ERIC Educational Resources Information Center

    Kanas, Agnieszka; van Tubergen, Frank

    2009-01-01

    This study examines the economic returns to schooling acquired in the country of origin and the country of destination. It uses large-scale survey data on Turkish, Moroccan, Surinamese and Antillean immigrants in the Netherlands, which contain direct measures of pre- and post migration schooling. It is studied whether the returns to origin-country…

  12. A Large-Scale Investigation into the Relationship between Attendance and Attainment: A Study Using an Innovative, Electronic Attendance Monitoring System

    ERIC Educational Resources Information Center

    Newman-Ford, Loretta; Fitzgibbon, Karen; Lloyd, Stephen; Thomas, Stephen

    2008-01-01

    The literature available on the relationship between student attendance and attainment is inconsistent. Nevertheless, there is some empirical evidence to suggest that attendance is a determinant of academic performance and progression. Colby published results of a study which examined the relationship within a single year 1 undergraduate module,…

  13. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
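
    As a rough illustration of the server-side orchestration role that Python plays in such a workflow, the sketch below runs a per-channel processing function in parallel with logging. The module names hinted at in the comments (mosaic, preprocess, segment, extract_features) and the file layout are hypothetical stand-ins, not the actual FARSIGHT bindings.

        # Illustrative orchestration sketch in the spirit of the pipeline above;
        # the heavy image-analysis steps are only indicated as comments.
        import logging
        from concurrent.futures import ProcessPoolExecutor
        from pathlib import Path

        logging.basicConfig(filename="pipeline.log", level=logging.INFO,
                            format="%(asctime)s %(levelname)s %(message)s")

        def process_channel(channel_path: Path) -> Path:
            """Run mosaicking, artifact correction and segmentation for one channel."""
            logging.info("start %s", channel_path)
            # each step would call into the compiled image-analysis modules, e.g.
            # mosaicked = mosaic(channel_path)
            # cleaned   = preprocess(mosaicked)
            # labels    = segment(cleaned)
            # features  = extract_features(labels)
            result = channel_path.with_suffix(".features.csv")  # placeholder output
            logging.info("done %s", channel_path)
            return result

        if __name__ == "__main__":
            channels = sorted(Path("dataset").glob("channel_*.tif"))
            # one worker per channel; heavy steps run multi-threaded inside the modules
            with ProcessPoolExecutor(max_workers=5) as pool:
                outputs = list(pool.map(process_channel, channels))
            logging.info("pipeline finished: %d outputs", len(outputs))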

  14. Stable isotope probing to study functional components of complex microbial ecosystems.

    PubMed

    Mazard, Sophie; Schäfer, Hendrik

    2014-01-01

    This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure culture-independent technique circumvents limitations of traditional microbial isolation techniques or data mining from large-scale whole-community metagenomic studies to tease out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, from fatty-acid analyses, community structure and ecology studies, and targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and time-consuming bioinformatics analyses of large datasets.

  15. A KPI-based process monitoring and fault detection framework for large-scale processes.

    PubMed

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
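
    As a schematic illustration of the static, least-squares branch of such a KPI-based scheme, the sketch below fits a least-squares map from process variables to a KPI on fault-free data and flags samples whose prediction residual exceeds a simple 3-sigma threshold. The data, coefficients and threshold are synthetic; the paper's actual test statistics and thresholds may differ.

        # Minimal sketch of static least-squares KPI monitoring with synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)

        # training data under normal operation: process variables X, KPI y
        X_train = rng.normal(size=(500, 6))
        true_coef = np.array([1.0, -0.5, 0.3, 0.0, 0.2, -0.1])
        y_train = X_train @ true_coef + 0.05 * rng.normal(size=500)

        # least-squares model linking process variables to the KPI
        theta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
        residual_std = np.std(y_train - X_train @ theta)
        threshold = 3.0 * residual_std          # simple 3-sigma detection threshold

        def kpi_fault(x_new, y_new):
            """Flag a KPI-relevant fault when the prediction residual exceeds the threshold."""
            return abs(y_new - x_new @ theta) > threshold

        # example: a sample whose KPI is disturbed by a fault of size 1.0
        x = rng.normal(size=6)
        print(kpi_fault(x, x @ theta + 1.0))    # True: residual far above 3 sigma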

  16. A comparison of working in small-scale and large-scale nursing homes: A systematic review of quantitative and qualitative evidence.

    PubMed

    Vermeerbergen, Lander; Van Hootegem, Geert; Benders, Jos

    2017-02-01

    Ongoing shortages of care workers, together with an ageing population, make it of utmost importance to increase the quality of working life in nursing homes. Since the 1970s, normalised and small-scale nursing homes have been increasingly introduced to provide care in a family and homelike environment, potentially providing a richer work life for care workers as well as improved living conditions for residents. 'Normalised' refers to the opportunities given to residents to live in a manner as close as possible to the everyday life of persons not needing care. The study purpose is to provide a synthesis and overview of empirical research comparing the quality of working life - together with related work and health outcomes - of professional care workers in normalised small-scale nursing homes as compared to conventional large-scale ones. A systematic review of qualitative and quantitative studies. A systematic literature search (April 2015) was performed using the electronic databases Pubmed, Embase, PsycInfo, CINAHL and Web of Science. References and citations were tracked to identify additional, relevant studies. We identified 825 studies in the selected databases. After checking the inclusion and exclusion criteria, nine studies were selected for review. Two additional studies were selected after reference and citation tracking. Three studies were excluded after requesting more information on the research setting. The findings from the individual studies suggest that levels of job control and job demands (all but "time pressure") are higher in normalised small-scale homes than in conventional large-scale nursing homes. Additionally, some studies suggested that social support and work motivation are higher, while risks of burnout and mental strain are lower, in normalised small-scale nursing homes. Other studies found no differences or even opposing findings. The studies reviewed showed that these inconclusive findings can be attributed to care workers in some normalised small-scale homes experiencing isolation and too high job demands in their work roles. This systematic review suggests that normalised small-scale homes are a good starting point for creating a higher quality of working life in the nursing home sector. Higher job control enables care workers to manage higher job demands in normalised small-scale homes. However, some jobs would benefit from interventions to address care workers' perceptions of too low social support and of too high job demands. More research is needed to examine strategies to enhance these working life issues in normalised small-scale settings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Full-scale hingeless rotor performance and loads

    NASA Technical Reports Server (NTRS)

    Peterson, Randall L.

    1995-01-01

    A full-scale BO-105 hingeless rotor system was tested in the NASA Ames 40- by 80-Foot Wind Tunnel on the rotor test apparatus. Rotor performance, rotor loads, and aeroelastic stability as functions of both collective and cyclic pitch, tunnel velocity, and shaft angle were investigated. This test was performed in support of the Rotor Data Correlation Task under the U.S. Army/German Memorandum of Understanding on Cooperative Research in the Field of Helicopter Aeromechanics. The primary objective of this test program was to create a data base for full-scale hingeless rotor performance and structural blade loads. A secondary objective was to investigate the ability to match flight test conditions in the wind tunnel. This data base can be used for the experimental and analytical studies of hingeless rotor systems over large variations in rotor thrust and tunnel velocity. Rotor performance and structural loads for tunnel velocities from hover to 170 knots and thrust coefficients (C_T/σ) from 0.0 to 0.12 are presented in this report. Thrust sweeps at tunnel velocities of 10, 20, and 30 knots are also included in this data set.

  18. Development and Evaluation of a Parallel Reaction Monitoring Strategy for Large-Scale Targeted Metabolomics Quantification.

    PubMed

    Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin

    2016-04-19

    Recent advances in mass spectrometers which have yielded higher resolution and faster scanning speeds have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled, parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performances of PRM and MS1-based assays in Q-Exactive were compared, and the MRM assay in QTRAP 6500 was also compared. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification and also showed greater flexibility in postacquisition assay refinement than the MRM assay in QTRAP 6500. We present a workflow for convenient PRM data processing using Skyline software which is free of charge. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for evaluation of large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics study.

  19. Application of Dynamic Mode Decomposition: Temporal Evolution of Flow Structures in an Aneurysm

    NASA Astrophysics Data System (ADS)

    Conlin, William; Yu, Paulo; Durgesh, Vibhav

    2017-11-01

    An aneurysm is an enlargement of a weakened arterial wall that can be fatal or debilitating on rupture. Aneurysm hemodynamics is integral to developing an understanding of aneurysm formation, growth, and rupture. The flow in an aneurysm exhibits complex fluid dynamics behavior due to an inherent unsteady inflow condition and its interactions with large-scale flow structures present in the aneurysm. The objective of this study is to identify the large-scale structures in the aneurysm, study their temporal behavior, and quantify their interaction with the inflow condition. For this purpose, detailed Particle Image Velocimetry (PIV) measurements were performed at the center plane of an idealized aneurysm model for a range of inflow conditions. Inflow conditions were precisely controlled using a ViVitro SuperPump system. Dynamic Mode Decomposition (DMD) of the velocity field was used to identify coherent structures and their temporal behavior. DMD was successful in capturing the large-scale flow structures and their temporal behavior. A low-dimensional approximation to the flow field was obtained from the most relevant dynamic modes and was used to obtain temporal information about the coherent structures (their formation, evolution, and growth) and their interaction with the inflow.
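
    For readers unfamiliar with the technique, the snippet below sketches a standard exact-DMD computation on a snapshot matrix. The synthetic travelling-wave data and the truncation rank are illustrative only, not the PIV data or settings of this study.

        # Compact sketch of exact DMD on a sequence of velocity-field snapshots.
        import numpy as np

        def dmd(snapshots, r=10):
            """snapshots: (n_points, n_times) matrix of velocity snapshots."""
            X, Y = snapshots[:, :-1], snapshots[:, 1:]
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            U, s, Vh = U[:, :r], s[:r], Vh[:r]            # rank-r truncation
            A_tilde = U.conj().T @ Y @ Vh.conj().T / s    # low-rank evolution operator
            eigvals, W = np.linalg.eig(A_tilde)
            modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
            return eigvals, modes                         # temporal dynamics, spatial structures

        # synthetic example: travelling wave sampled at 100 points over 50 time steps
        x = np.linspace(0, 2 * np.pi, 100)
        t = np.linspace(0, 4 * np.pi, 50)
        data = np.array([np.sin(x - ti) for ti in t]).T
        eigvals, modes = dmd(data, r=2)      # rank 2 matches the single oscillatory pair
        print(np.abs(eigvals))               # magnitudes near 1 for neutrally stable modes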

  20. Performance of distributed multiscale simulations

    PubMed Central

    Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.

    2014-01-01

    Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258

  1. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  2. Higher Education Teachers' Descriptions of Their Own Learning: A Large-Scale Study of Finnish Universities of Applied Sciences

    ERIC Educational Resources Information Center

    Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa

    2016-01-01

    In this large-scale study, higher education teachers' descriptions of their own learning were examined with qualitative analysis involving application of principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…

  3. Variational study on the vibrational level structure and vibrational level mixing of highly vibrationally excited S₀ D₂CO.

    PubMed

    Rashev, Svetoslav; Moule, David C; Rashev, Vladimir

    2012-11-01

    We perform converged high-precision variational calculations to determine the frequencies of a large number of vibrational levels in S₀ D₂CO, extending from low to very high excess vibrational energies. For the calculations we use our specific vibrational method (recently employed for studies on H₂CO), consisting of a combination of a search/selection algorithm and a Lanczos iteration procedure. Using the same method we perform large-scale converged calculations on the vibrational level spectral structure and fragmentation at selected highly excited overtone states, up to excess vibrational energies of ∼17,000 cm⁻¹, in order to study the characteristics of intramolecular vibrational redistribution (IVR), vibrational level density and mode selectivity. Copyright © 2012 Elsevier B.V. All rights reserved.
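
    As a generic illustration of the Lanczos step mentioned above, the sketch below runs a plain Lanczos iteration (without reorthogonalization) on a random symmetric matrix standing in for a vibrational Hamiltonian. The paper's search/selection algorithm and the actual Hamiltonian construction are not reproduced here.

        # Generic Lanczos sketch: extreme eigenvalues of a large symmetric matrix.
        import numpy as np

        def lanczos(A, v0, m=50):
            """Build an m-step Krylov tridiagonal matrix T whose eigenvalues
            approximate the extreme eigenvalues of the symmetric matrix A."""
            n = len(v0)
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            V[:, 0] = v0 / np.linalg.norm(v0)
            for j in range(m):
                w = A @ V[:, j]
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j > 0:
                    w -= beta[j - 1] * V[:, j - 1]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)
                    V[:, j + 1] = w / beta[j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            return np.linalg.eigvalsh(T)

        rng = np.random.default_rng(1)
        M = rng.normal(size=(400, 400))
        A = (M + M.T) / 2                      # symmetric test matrix
        ritz = lanczos(A, rng.normal(size=400), m=60)
        print(ritz[:3], ritz[-3:])             # lowest/highest Ritz values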

  4. Genetic Approaches to Study Meiosis and Meiosis-Specific Gene Expression in Saccharomyces cerevisiae.

    PubMed

    Kassir, Yona; Stuart, David T

    2017-01-01

    The budding yeast Saccharomyces cerevisiae has a long history as a model organism for studies of meiosis and the cell cycle. The popularity of this yeast as a model is in large part due to the variety of genetic and cytological approaches that can be effectively performed with the cells. Cultures of the cells can be induced to progress synchronously through meiosis and sporulation, allowing large-scale gene expression and biochemical studies to be performed. Additionally, the spore tetrads resulting from meiosis make it possible to characterize the haploid products of meiosis, allowing investigation of meiotic recombination and chromosome segregation. Here we describe genetic methods for analyzing the progression of S. cerevisiae through meiosis and sporulation, with an emphasis on strategies for the genetic analysis of regulators of meiosis-specific genes.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large-scale features after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  6. Large scale integration of graphene transistors for potential applications in the back end of the line

    NASA Astrophysics Data System (ADS)

    Smith, A. D.; Vaziri, S.; Rodriguez, S.; Östling, M.; Lemme, M. C.

    2015-06-01

    A chip- to wafer-scale, CMOS-compatible method of graphene device fabrication has been established, which can be integrated into the back end of the line (BEOL) of conventional semiconductor process flows. In this paper, we present experimental results of graphene field-effect transistors (GFETs) which were fabricated using this wafer-scalable method. The carrier mobilities in these transistors reach up to several hundred cm² V⁻¹ s⁻¹. Further, these devices exhibit current saturation regions similar to graphene devices fabricated using mechanical exfoliation. The overall performance of the GFETs cannot yet compete with record values reported for devices based on mechanically exfoliated material. Nevertheless, this large-scale approach is an important step towards reliability and variability studies as well as optimization of device aspects such as electrical contacts and dielectric interfaces with statistically relevant numbers of devices. It is also an important milestone towards introducing graphene into wafer-scale process lines.

  7. Relay discovery and selection for large-scale P2P streaming

    PubMed Central

    Zhang, Chengwei; Wang, Angela Yunxian

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384

  8. Relay discovery and selection for large-scale P2P streaming.

    PubMed

    Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
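
    A minimal sketch of the two-phase idea described above is shown below: an indirect, coordinate-based pre-filter narrows the candidate set, then a direct probe refines the final choice. The coordinates, the probe() function, and the candidate count K are synthetic stand-ins for real King/ICS measurements and RTT probing.

        # Two-phase relay selection: coarse coordinate filter, then direct probing.
        import random

        random.seed(0)
        peers = [{"id": i,
                  "coord": (random.uniform(0, 100), random.uniform(0, 100))}
                 for i in range(1000)]
        me = (50.0, 50.0)

        def coord_distance(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

        def probe(peer):
            # stand-in for a real RTT measurement: coordinate distance plus noise
            return coord_distance(me, peer["coord"]) + random.gauss(0, 5)

        # phase 1: indirect measurements narrow 1000 peers down to K candidates
        K = 8
        candidates = sorted(peers, key=lambda p: coord_distance(me, p["coord"]))[:K]

        # phase 2: direct probing refines the final choice among the K candidates
        relay = min(candidates, key=probe)
        print("selected relay:", relay["id"])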

  9. A review of public desert land lease policies for concentrated solar power plants and the impact on their economic performance

    NASA Astrophysics Data System (ADS)

    Smyrnakis, Christos; Phocas-Cosmetatos, Alex; Kynigalakis, Kostantinos

    2016-05-01

    Large-scale Concentrated Solar Power (CSP) plants need large plots of land with a very high solar resource and thus are often deployed in desert areas, which are usually owned by the state or a municipal authority. This study discusses the implications and practices of land lease policies with regard to CSP development. The strategy followed for a land lease is, by definition, case-specific, and this text is by no means exhaustive. The study also discusses the pricing of land in various cases, presents the governing types of land lease, and examines their effect on the economic performance of hypothetical CSP projects under various cases.

  10. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
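
    For orientation, the depth-averaged shallow water system that such Godunov-type finite volume solvers discretize can be written in conservation form as below; the exact source terms (bed slope, friction) used in this particular model are not specified in the abstract and may differ:

        \frac{\partial \mathbf{U}}{\partial t}
          + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
          + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}),
        \qquad
        \mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
        \mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2}gh^2 \\ huv \end{pmatrix},\;
        \mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2}gh^2 \end{pmatrix}

    with h the water depth, (u, v) the depth-averaged velocity components, and g the gravitational acceleration.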

  11. Large scale preparation and crystallization of neuron-specific enolase.

    PubMed

    Ishioka, N; Isobe, T; Kadoya, T; Okuyama, T; Nakajima, T

    1984-03-01

    A simple method has been developed for the large scale purification of neuron-specific enolase [EC 4.2.1.11]. The method consists of ammonium sulfate fractionation of brain extract, and two subsequent column chromatography steps on DEAE Sephadex A-50. The chromatography was performed on a short (25 cm height) and thick (8.5 cm inside diameter) column unit that was specially devised for the large scale preparation. The purified enolase was crystallized in 0.05 M imidazole-HCl buffer containing 1.6 M ammonium sulfate (pH 6.39), with a yield of 0.9 g/kg of bovine brain tissue.

  12. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.

  13. Cosmological measurements with general relativistic galaxy correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth

    We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-distortion, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting these effects biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale galaxy number count surveys.
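
    For orientation, a Fisher forecast of this kind reduces to building the matrix F_ij from derivatives of the observables with respect to the parameters, weighted by the inverse data covariance. The sketch below does this for a toy observable; it does not reproduce the CLASS number-count spectra or the survey specifications used in the paper.

        # Schematic Fisher forecast: numerical derivatives + assumed covariance.
        import numpy as np

        def observable(params, x):
            a, b = params                        # toy stand-ins for cosmological parameters
            return a * np.exp(-b * x)

        x = np.linspace(0.1, 2.0, 40)
        fid = np.array([1.0, 0.5])               # fiducial parameter values
        cov = np.diag(np.full(x.size, 0.01**2))  # assumed data covariance
        inv_cov = np.linalg.inv(cov)

        # numerical derivatives d(observable)/d(parameter) at the fiducial point
        derivs = []
        for i in range(len(fid)):
            step = np.zeros_like(fid)
            step[i] = 1e-4
            derivs.append((observable(fid + step, x) - observable(fid - step, x)) / 2e-4)

        F = np.array([[d_i @ inv_cov @ d_j for d_j in derivs] for d_i in derivs])
        errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalised 1-sigma forecasts
        print(errors)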

  14. PSI Wide Area Product (WAP) for measuring Ground Surface Displacements at regional level for multi-hazards studies

    NASA Astrophysics Data System (ADS)

    Duro, Javier; Iglesias, Rubén; Blanco, Pablo; Albiol, David; Koudogbo, Fifamè

    2015-04-01

    The Wide Area Product (WAP) is a new interferometric product developed to provide measurements over large regions. Persistent Scatterer Interferometry (PSI) has largely proved its robust and precise performance in measuring ground surface deformation in different application domains. In this context, however, accurate displacement estimation over large-scale areas (more than 10,000 km²) characterized by low-magnitude motion gradients (3-5 mm/year), such as the ones induced by inter-seismic or Earth tidal effects, still remains an open issue. The main reason is the inclusion of lower-quality and more distant persistent scatterers in order to bridge low-quality areas, such as water bodies, crop areas and forested regions. This leads to spatial propagation errors in the PSI integration process, poor estimation and compensation of the Atmospheric Phase Screen (APS), and difficulty in handling residual long-wavelength phase patterns originating from orbit state vector inaccuracies. Research work on generating a Wide Area Product of ground motion in preparation for the Sentinel-1 mission has been conducted in the last stages of Terrafirma as well as in other research programs. These developments propose technological updates for keeping the precision of large-scale PSI analysis. Some of the updates are based on the use of external information, such as meteorological models, and the employment of GNSS data for an improved calibration of large-scale measurements. Usually, covering wide regions implies processing areas whose land use is chiefly livestock, horticulture, urbanization and forest. This represents an important challenge for providing continuous InSAR measurements and for applying advanced phase filtering strategies to enhance the coherence. The advanced PSI processing has been performed over several areas, allowing a large-scale analysis of tectonic patterns and of motion caused by multiple hazards such as volcanic activity, landslides and floods. Several examples of the application of the PSI WAP to wide regions for measuring ground displacements related to different types of hazards, natural and human-induced, will be presented. The InSAR processing approach to measure accurate movements at local and large scales, enabling multi-hazard interpretation studies, will also be discussed.

  15. Variations of characteristic time scales in rotating stratified turbulence using a large parametric numerical study.

    PubMed

    Rosenberg, D; Marino, R; Herbert, C; Pouquet, A

    2016-01-01

    We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024³ points, with no modeling of the small scales, no forcing and with large-scale random initial conditions for the velocity field only, and there are altogether 65 runs analyzed in this paper. The buoyancy Reynolds number defined as R_B = Re Fr² varies from negligible values to ≈ 10⁵, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seem to play a major role in the absence of forcing in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of Richardson number Ri. It reaches a plateau (on which time scales become comparable, γ ≈ 0.6) when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards an isotropization of the flow.
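
    For reference, the dimensionless numbers and time scales quoted above follow the usual conventions for rotating stratified flows; the paper's precise choices of characteristic length and velocity scales are not given in this record and may differ slightly:

        Fr = \frac{U}{N L}, \qquad Ro = \frac{U}{f L}, \qquad R_B = Re\,Fr^2, \qquad
        T_V = \frac{E_V}{\varepsilon_V}, \qquad T_P = \frac{E_P}{\varepsilon_P}, \qquad
        \gamma = \frac{T_V}{T_P}

    with N the Brunt-Väisälä frequency, f the inertial (Coriolis) frequency, and E and ε the energies and dissipation rates of the velocity (V) and temperature/potential (P) fluctuations.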

  16. Global Simulations of Dynamo and Magnetorotational Instability in Madison Plasma Experiments and Astrophysical Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebrahimi, Fatima

    2014-07-31

    Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of magnetic field as a global quantity, the so-called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so-called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.

  17. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber

    PubMed Central

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106

  18. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber.

    PubMed

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg-Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers.
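
    The coupled Ginzburg-Landau description mentioned in both of these records can be written schematically as below; the specific gain, dispersion, saturable-absorber and mode-coupling terms of the paper's distributed model are not given in the abstracts, so the coefficients here are generic placeholders:

        \frac{\partial A_m}{\partial z}
          = g_m A_m
          + \left(\beta_m + \tfrac{i}{2} D_m\right)\frac{\partial^2 A_m}{\partial t^2}
          + \left(\epsilon_m + i\,\gamma_m\right)
            \Big(|A_m|^2 + \sum_{n \neq m} c_{mn}\,|A_n|^2\Big) A_m

    where A_m(z, t) is the envelope of transverse mode m, g_m its net gain, D_m its group-velocity dispersion, γ_m the Kerr nonlinearity, and c_{mn} the cross-coupling between modes.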

  19. Large-scale multi-stage constructed wetlands for secondary effluents treatment in northern China: Carbon dynamics.

    PubMed

    Wu, Haiming; Fan, Jinlin; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan

    2018-02-01

    Multi-stage constructed wetlands (CWs) have proved to be a cost-effective alternative for improving the treatment performance of various wastewaters as compared with conventional single-stage CWs. However, few long-term full-scale multi-stage CWs have been operated and evaluated for polishing effluents from domestic wastewater treatment plants (WWTP). This study investigated the seasonal and spatial dynamics of carbon and the effects of the key factors (input loading and temperature) in the large-scale seven-stage Wu River CW polishing domestic WWTP effluents in northern China. The results indicated a significant improvement in water quality. Significant seasonal and spatial variations of organics removal were observed in the Wu River CW, with a higher COD removal efficiency of 64-66% in summer and fall. Obvious seasonal and spatial variations of CH₄ and CO₂ emissions were also found, with average CH₄ and CO₂ emission rates of 3.78-35.54 mg m⁻² d⁻¹ and 610.78-8992.71 mg m⁻² d⁻¹, respectively, while the higher CH₄ and CO₂ emission flux was obtained in spring and summer. Seasonal air temperatures and inflow COD loading rates significantly affected organics removal and CH₄ emission, but they appeared to have a weak influence on CO₂ emission. Overall, this study suggested that the large-scale Wu River CW might be a potential source of greenhouse gases (GHG), but considering the sustainability of the multi-stage CW, an inflow COD loading rate of 1.8-2.0 g m⁻² d⁻¹ and a temperature of 15-20 °C may be the suitable condition for achieving higher organics removal efficiency and lower GHG emission in polishing the domestic WWTP effluent. The obtained knowledge of the carbon dynamics in the large-scale Wu River CW will be helpful for understanding the carbon cycles, and can also provide useful field experience for the design, operation and management of multi-stage CW treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
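
    As a brief aside on how the method gains its speed: COLA splits each particle trajectory into an analytic LPT part and a residual that the N-body integrator actually solves for. Schematically (the notation here is generic, not copied from the paper):

        \mathbf{x}(t) = \mathbf{x}_{\rm LPT}(t) + \delta\mathbf{x}(t), \qquad
        \frac{d^2 \delta\mathbf{x}}{dt^2} = -\nabla\Phi(\mathbf{x}) - \frac{d^2 \mathbf{x}_{\rm LPT}}{dt^2}

    so that even with very few timesteps the large-scale displacement remains accurate to the order of the LPT solution, while the particle-mesh force -∇Φ corrects the small scales.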

  1. Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories

    ERIC Educational Resources Information Center

    Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.

    2011-01-01

    A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…

  2. Student Performance and Attitudes in a Collaborative and Flipped Linear Algebra Course

    ERIC Educational Resources Information Center

    Murphy, Julia; Chang, Jen-Mei; Suaray, Kagba

    2016-01-01

    Flipped learning is gaining traction in K-12 for enhancing students' problem-solving skills at an early age; however, there is relatively little large-scale research showing its effectiveness in promoting better learning outcomes in higher education, especially in mathematics classes. In this study, we examined the data compiled from both…

  3. The Stability of Rater Severity in Large-Scale Assessment Programs.

    ERIC Educational Resources Information Center

    Congdon, Peter J.; McQueen, Joy

    2000-01-01

    Studied the stability of rater severity over an extended rating period by applying multifaceted Rasch analysis to ratings of 16 raters of writing performances of 8,285 elementary school students. Findings cast doubt on the practice of using a single calibration of rate severity as the basis for adjustment of person measures. (SLD)

  4. Heritability in Cognitive Performance: Evidence Using Computer-Based Testing

    ERIC Educational Resources Information Center

    Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas

    2012-01-01

    There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…

  5. Field scale test of multi-dimensional flow and morphodynamic simulations used for restoration design analysis

    USGS Publications Warehouse

    McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive and under most channel forming flows the resulting channel change is generally small. Several channel restoration projects designed in part to armor large meanders with several large spurs constructed of wooden piles on the Kootenai River, ID, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.

  6. Neurodevelopmental alterations of large-scale structural networks in children with new-onset epilepsy

    PubMed Central

    Bonilha, Leonardo; Tabesh, Ali; Dabbs, Kevin; Hsu, David A.; Stafstrom, Carl E.; Hermann, Bruce P.; Lin, Jack J.

    2014-01-01

    Recent neuroimaging and behavioral studies have revealed that children with new onset epilepsy already exhibit brain structural abnormalities and cognitive impairment. How the organization of large-scale brain structural networks is altered near the time of seizure onset and whether network changes are related to cognitive performances remain unclear. Recent studies also suggest that regional brain volume covariance reflects synchronized brain developmental changes. Here, we test the hypothesis that epilepsy during early-life is associated with abnormalities in brain network organization and cognition. We used graph theory to study structural brain networks based on regional volume covariance in 39 children with new-onset seizures and 28 healthy controls. Children with new-onset epilepsy showed a suboptimal topological structural organization with enhanced network segregation and reduced global integration compared to controls. At the regional level, structural reorganization was evident with redistributed nodes from the posterior to more anterior head regions. The epileptic brain network was more vulnerable to targeted but not random attacks. Finally, a subgroup of children with epilepsy, namely those with lower IQ and poorer executive function, had a reduced balance between network segregation and integration. Taken together, the findings suggest that the neurodevelopmental impact of new onset childhood epilepsies alters large-scale brain networks, resulting in greater vulnerability to network failure and cognitive impairment. PMID:24453089
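
    A minimal sketch of the kind of covariance-network analysis described above, assuming networkx and a subjects-by-regions matrix of regional volumes; the threshold value and function names are illustrative, not the study's exact pipeline.

        import numpy as np
        import networkx as nx

        def covariance_network_metrics(regional_volumes, threshold=0.3):
            # Build a structural covariance network by thresholding the
            # inter-regional correlation matrix, then summarise segregation
            # (clustering) and integration (characteristic path length).
            corr = np.corrcoef(regional_volumes, rowvar=False)
            np.fill_diagonal(corr, 0.0)
            G = nx.from_numpy_array((corr > threshold).astype(int))
            segregation = nx.average_clustering(G)
            integration = nx.average_shortest_path_length(G)  # assumes G is connected
            return segregation, integration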

  7. Multi-scale Modeling of Arctic Clouds

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  8. Development of superconductor magnetic suspension and balance prototype facility for studying the feasibility of applying this technique to large scale aerodynamic testing

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    The unique design and operational characteristics of a prototype magnetic suspension and balance facility which utilizes superconductor technology are described and discussed from the point of view of scalability to large sizes. The successful experimental demonstration of the feasibility of this new magnetic suspension concept at the University of Virginia, together with the success of the cryogenic wind-tunnel concept developed at Langley Research Center, appears to have finally opened the way to clean-tunnel, high-Re aerodynamic testing. Results of calculations corresponding to a two-step design extrapolation from the observed performance of the prototype magnetic suspension system to a system compatible with the projected cryogenic transonic research tunnel are presented to give an order-of-magnitude estimate of expected performance characteristics. Research areas where progress should lead to improved design and performance of large facilities are discussed.

  9. Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.

    PubMed

    Baars, W J; Hutchins, N; Marusic, I

    2017-03-13

    Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re_τ ≈ 2800 to 22,800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  10. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
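
    The strong- and weak-scaling behaviour quoted above can be summarised with the usual efficiency definitions; the small helpers below are a generic sketch of that arithmetic, not the authors' benchmarking harness, and the example numbers are purely illustrative.

        def strong_scaling_efficiency(t_ref, t_p, p_ref, p):
            # Strong scaling: fixed total problem size; ideal runtime falls as 1/p.
            speedup = t_ref / t_p
            return speedup / (p / p_ref)

        def weak_scaling_efficiency(t_ref, t_p):
            # Weak scaling: problem size grows with concurrency; ideal runtime is flat.
            return t_ref / t_p

        # Illustrative numbers only: ~80% strong-scaling efficiency going 1,000 -> 8,000 cores.
        eff = strong_scaling_efficiency(t_ref=100.0, t_p=15.6, p_ref=1000, p=8000)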

  11. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    NASA Astrophysics Data System (ADS)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduce the development of the "LPJ-Hydrology" (LH) model, which incorporates satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois agrees well (R2 = 0.79, p < 0.01) with observations over the years 1984-2001. The modeled monthly stream flow for most of the 12 major rivers in the US is consistent (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficients > 0.52) with observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly stream flow in winter and early spring by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. The LH model developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
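
    The agreement statistics reported above (R2 and Nash-Sutcliffe coefficients) can be reproduced for any pair of simulated and observed series with a few lines of Python; the helpers below are a generic sketch, not part of the LPJ-Hydrology code.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 for a perfect fit; NSE = 0 means the model is no better
            # than predicting the observed mean.
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def r_squared(obs, sim):
            # Squared Pearson correlation, as used for the ET and soil-moisture checks.
            return np.corrcoef(obs, sim)[0, 1] ** 2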

  12. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    NASA Astrophysics Data System (ADS)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models because of their computational efficiency and their ability to resolve the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) formulation of hydrologic response units (HRUs), and (2) aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  13. Large- and Very-Large-Scale Motions in Katabatic Flows Over Steep Slopes

    NASA Astrophysics Data System (ADS)

    Giometto, M. G.; Fang, J.; Salesky, S.; Parlange, M. B.

    2016-12-01

    Evidence of large- and very-large-scale motions populating the boundary layer in katabatic flows over steep slopes is presented via direct numerical simulations (DNSs). DNSs are performed at a modified Reynolds number (Rem = 967), considering four sloping angles (α = 60°, 70°, 80° and 90°). Large coherent structures prove to be strongly dependent on the inclination of the underlying surface. Spectra and co-spectra consistently show signatures of large-scale motions (LSMs), with streamwise extension on the order of the boundary layer thickness. A second low-wavenumber mode characterizes pre-multiplied spectra and co-spectra when the slope angle is below 70°, indicative of very-large-scale motions (VLSMs). In addition, conditional sampling and averaging shows how LSMs and VLSMs are induced by counter-rotating roll modes, in agreement with findings from canonical wall-bounded flows. VLSMs contribute to the stream-wise velocity variance and shear stress in the above-jet regions up to 30% and 45% respectively, whereas both LSMs and VLSMs are inactive in the near-wall regions.

  14. Synchronization of coupled large-scale Boolean networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fangfei, E-mail: li-fangfei@163.com

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
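
    For very small networks, complete synchronization of a drive-response Boolean pair can be checked directly from the definition, as in the toy sketch below. This brute-force enumeration only illustrates the property being studied; it is not the aggregation algorithm of the paper, which is what makes the large-scale case tractable.

        import numpy as np
        from itertools import product

        def synchronizes(f_drive, f_response, n, transient=20, window=10):
            # Toy check of complete synchronization: after a transient, the drive
            # state x and the driven response state y must agree for `window`
            # consecutive steps from every joint initial condition.
            for x0, y0 in product(product([0, 1], repeat=n), repeat=2):
                x, y = np.array(x0), np.array(y0)
                for _ in range(transient):
                    x, y = f_drive(x), f_response(x, y)
                for _ in range(window):
                    if not np.array_equal(x, y):
                        return False
                    x, y = f_drive(x), f_response(x, y)
            return True

        # Example: a 2-node response that simply copies the drive synchronizes trivially.
        f_drive = lambda x: np.array([x[1], x[0] ^ x[1]])
        f_response = lambda x, y: f_drive(x)
        assert synchronizes(f_drive, f_response, n=2)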

  15. Development of a database system for mapping insertional mutations onto the mouse genome with large-scale experimental data

    PubMed Central

    2009-01-01

    Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which is developed to serve the on-going large-scale PB insertional mutagenesis project. A lightweight enterprise-level development framework Struts-Spring-Hibernate is used here to ensure constructive and flexible support to the application. The MP-PBmice database system has three major features: strict access-control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505

  16. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  17. Potential of dynamically harmonized Fourier transform ion cyclotron resonance cell for high-throughput metabolomics fingerprinting: control of data quality.

    PubMed

    Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle

    2018-01-01

    Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from the repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of eventual technical drifts. RP values greater than 10^6 and mass measurement accuracies below 1 ppm were obtained in broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative-ion-mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell for performing large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.
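
    The two data-quality figures quoted above (resolving power and mass measurement accuracy) are simple ratios; the snippet below shows the arithmetic on an assumed, purely illustrative peak.

        def mass_error_ppm(measured_mz, theoretical_mz):
            # Mass measurement accuracy in parts per million.
            return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

        def resolving_power(mz, fwhm):
            # RP = m / (delta m), with delta m the peak full width at half maximum.
            return mz / fwhm

        # Illustrative values only: a 301.1410 Da ion measured at 301.1412 Da with a
        # 0.0002 Da peak width gives ~0.66 ppm error and RP of roughly 1.5e6.
        err = mass_error_ppm(301.1412, 301.1410)
        rp = resolving_power(301.1412, 0.0002)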

  18. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  19. Longitudinal aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap in ground effect

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1972-01-01

    The investigation of the in-ground-effect longitudinal aerodynamic characteristics of a large-scale swept augmentor-wing model is presented, using the 40 x 80 ft wind tunnel. The investigation was conducted at three ground heights: h/c = 2.01, 1.61, and 1.34. The induced effect of underwing nacelles was studied with two powered nacelle configurations. One configuration used four JT-15D turbofans while the other used two J-85 turbojet engines. Two conical nozzles on each J-85 were used to deflect the thrust at angles from 0 to 120 deg. Tests were also performed without nacelles to allow comparison with previous ground-effect data.

  20. Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading

    PubMed Central

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Background Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has still hardly been clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while the crossover comes with the emergence of their inconsistency at later times before reaching a stable state, where Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early time of a pandemic disease. PMID:21747932
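
    A hedged sketch of how the two scalings can be measured from simulation output: if the epidemic model emits a chronological sequence of events (for example, the subpopulation reporting each new case), the rank-frequency slope gives the Zipf exponent and the growth of the number of distinct infected subpopulations gives the Heaps exponent. The event encoding is an assumption made for illustration, not the paper's exact procedure.

        import numpy as np
        from collections import Counter

        def zipf_exponent(events):
            # Slope of log(frequency) vs log(rank); the Zipf exponent is its negative.
            freqs = np.sort(np.array(list(Counter(events).values())))[::-1]
            ranks = np.arange(1, len(freqs) + 1)
            slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
            return -slope

        def heaps_exponent(events):
            # Slope of log(distinct items seen) vs log(events seen).
            seen, distinct = set(), []
            for e in events:
                seen.add(e)
                distinct.append(len(seen))
            n = np.arange(1, len(events) + 1)
            slope, _ = np.polyfit(np.log(n), np.log(distinct), 1)
            return slope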

  1. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  2. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.; Halsey, William; Dehoff, Ryan

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  3. Overview of large scale experiments performed within the LBB project in the Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadecka, P.; Lauerova, D.

    1997-04-01

    In recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit piping of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with WWER 440 Type 230 and 213 and WWER 1000 Type 320 reactors. Within the relevant LBB projects, undertaken with the aim of proving that the LBB requirements are fulfilled, a series of large-scale experiments was performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  4. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.

  5. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts are therefore increasing significantly. As a matter of fact, a novel thematic science question to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large scale Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement resulting in correlation coefficients exceeding 70 % and Root Mean Squared Errors below 1 %. When benchmarked with the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge results in a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.

  6. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits their modeling. Current characterization requires large-scale testing to obtain steady detonation-wave data for analysis, owing to the relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.

  7. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron-ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter with recording on a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases that exceeded permissible values, recommendations were developed to reduce the level of seismic impact.

  8. A behavioral-level HDL description of SFQ logic circuits for quantitative performance analysis of large-scale SFQ digital systems

    NASA Astrophysics Data System (ADS)

    Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.

    2003-10-01

    Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate how to describe the functionality of large-scale SFQ digital circuits by a behavioral-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.

  9. Scale-space measures for graph topology link protein network architecture to function.

    PubMed

    Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen

    2014-06-15

    The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA. © The Author 2014. Published by Oxford University Press.
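
    A minimal sketch of the diffusion-kernel idea, assuming networkx and scipy; the "scale-aware centrality" below is an illustrative example of a scale-dependent measure, not the paper's exact STM definitions.

        import numpy as np
        import networkx as nx
        from scipy.linalg import expm

        def diffusion_kernels(G, scales=(0.5, 2.0, 8.0)):
            # Heat kernels K_t = exp(-t L) of the graph Laplacian: small t probes
            # direct interactions, large t probes complex- and module-level structure.
            L = nx.laplacian_matrix(G).toarray().astype(float)
            return {t: expm(-t * L) for t in scales}

        def scale_aware_centrality(G, scales=(0.5, 2.0, 8.0)):
            # How much diffusion mass each node collects from all other nodes,
            # evaluated across the whole scale-space.
            return {t: K.sum(axis=0) - np.diag(K)
                    for t, K in diffusion_kernels(G, scales).items()}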

  10. Towards a simple representation of chalk hydrology in land surface modelling

    NASA Astrophysics Data System (ADS)

    Rahman, Mostaquimur; Rosolem, Rafael

    2017-01-01

    Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
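
    One plausible reading of the two-parameter Bulk Conductivity idea is sketched below; the switch-on of fracture flow above a wetness threshold and the power-law matrix term are assumptions made for illustration and are not the exact JULES/BC formulation.

        def bulk_conductivity(theta, theta_sat, k_sat_matrix,
                              macro_factor, wetness_threshold, b=6.0):
            # Below the soil-wetness threshold only matrix flow operates (scaled
            # from the chalk-matrix saturated conductivity); above it, a fracture
            # (macropore) contribution controlled by the macroporosity factor is added.
            s = min(max(theta / theta_sat, 0.0), 1.0)         # relative wetness
            k = k_sat_matrix * s ** (2.0 * b + 3.0)            # matrix contribution
            if s > wetness_threshold:
                k += (macro_factor * k_sat_matrix
                      * (s - wetness_threshold) / (1.0 - wetness_threshold))
            return k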

  11. Assessing the performance of multi-purpose channel management measures at increasing scales

    NASA Astrophysics Data System (ADS)

    Wilkinson, Mark; Addy, Steve

    2016-04-01

    In addition to hydroclimatic drivers, sediment deposition from high energy river systems can reduce channel conveyance capacity and lead to significant increases in flood risk. There is an increasing recognition that we need to work with the interplay of natural hydrological and morphological processes in order to attenuate flood flows and manage sediment (both coarse and fine). This typically includes both catchment (e.g. woodland planting, wetlands) and river (e.g. wood placement, floodplain reconnection) restoration approaches. The aim of this work was to assess at which scales channel management measures (notably wood placement and flood embankment removal) are most appropriate for flood and sediment management in high energy upland river systems. We present research findings from two densely instrumented research sites in Scotland which regularly experience flood events and have associated coarse sediment problems. We assessed the performance of a range of novel trial measures for three different scales: wooded flow restrictors and gully tree planting at the small scale (<1 km2), floodplain tree planting and engineered log jams at the intermediate scale (5-60 km2), and flood embankment lowering at the large scale (350 km2). Our results suggest that at the smallest scale, care is needed in the installation of flow restrictors. It was found for some restrictors that vertical erosion can occur if the tributary channel bed is disturbed. Preliminary model evidence suggested they have a very limited impact on channel discharge and flood peak delay owing to the small storage areas behind the structures. At intermediate scales, the ability to trap sediment by engineered log jams was limited. Of the 45 engineered log jams installed, around half created a small geomorphic response and only 5 captured a significant amount of coarse material (during one large flood event). As scale increases, the chance of damage or loss of wood placement is greatest. Monitoring highlights the importance of structure design (porosity and degree of channel blockage) and placement in zones of high sediment transport to optimise performance. At the large scale, well designed flood embankment lowering can improve connectivity to the floodplain during low to medium return period events. However, ancillary works to stabilise the bank failed thus emphasising the importance of letting natural processes readjust channel morphology and hydrological connections to the floodplain. Although these trial measures demonstrated limited effects, this may be in part owing to restrictions in the range of hydroclimatological conditions during the study period and further work is needed to assess the performance under more extreme conditions. This work will contribute to refining guidance for managing channel coarse sediment problems in the future which in turn could help mitigate flooding using natural approaches.

  12. The influence of Seychelles Dome on the large scale Tropical Variability

    NASA Astrophysics Data System (ADS)

    Manola, Iris; Selten, Frank; Hazeleger, Wilco

    2013-04-01

    The Seychelles Dome (SD) is the thermocline ridge just south of the equator in the Western Indian Ocean basin. It is characterized by strong atmospheric convection and a shallow thermocline and is associated with large intraseasonal convection and SST variability (Harrison and Vecchi 2001). The SD is influenced by surface and subsurface processes, such as air-sea fluxes, Ekman upwelling from wind stress curl, ocean dynamics (vertical mixing) and oceanic Rossby waves from the southeastern Indian Ocean. The favoring season for a strong SD is the boreal winter, when the thermocline is most shallow. Then the southeasterly trade winds converge with the northwesterly monsoonal winds over the intertropical convergence zone and cause cyclonic wind stress curl that drives Ekman divergence and a ridging of the thermocline. It is found that the subseasonal and interannual variability of the SD is influenced by large scale events, such as the Indian Ocean Dipole (IOD), the ENSO and the Madden-Julian Oscillation (MJO) (Tozuka et al., 2010, Lloyd and Vecchi, 2010). The SD is enhanced by cooling events in the Western Indian Ocean and easterly winds that raise the thermocline and increase the upwelling. This can be associated with a strong Walker circulation, like negative IOD conditions or La Nina-like conditions. So far the studies focus on the origins of the SD variability, but the influence of the SD itself on regional or large scale climate is largely unknown. In this study we focus on the influence of the SD variations on the large scale tropical circulation. We analyze the covariance of the SD variations and the tropical circulation in a 200-year control simulation of the climate model EC-EARTH and perform idealized SST-forced simulations to study the character of the atmospheric response and its relation to ENSO, IOD and MJO. References -Harrison, D. E. and G. A. Vecchi, 2001: January 1999 Indian Ocean cooling event. Geophys. Res. Lett., 28, 3717-3720. -Lloyd, I. D., and G. A. Vecchi, 2010: Submonthly Indian Ocean cooling events and their interaction with large-scale conditions. J. Climate, 23, 700-716. -Tozuka, T., T. Yokoi, and T. Yamagata, 2010: A modeling study of interannual variations of the Seychelles Dome. J. Geophys. Res., 115, C04005, doi:10.1029/2009JC005547.

  13. XPAT: a toolkit to conduct cross-platform association studies with heterogeneous sequencing datasets.

    PubMed

    Yu, Yao; Hu, Hao; Bohlender, Ryan J; Hu, Fulan; Chen, Jiun-Sheng; Holt, Carson; Fowler, Jerry; Guthery, Stephen L; Scheet, Paul; Hildebrandt, Michelle A T; Yandell, Mark; Huff, Chad D

    2018-04-06

    High-throughput sequencing data are increasingly being made available to the research community for secondary analyses, providing new opportunities for large-scale association studies. However, heterogeneity in target capture and sequencing technologies often introduce strong technological stratification biases that overwhelm subtle signals of association in studies of complex traits. Here, we introduce the Cross-Platform Association Toolkit, XPAT, which provides a suite of tools designed to support and conduct large-scale association studies with heterogeneous sequencing datasets. XPAT includes tools to support cross-platform aware variant calling, quality control filtering, gene-based association testing and rare variant effect size estimation. To evaluate the performance of XPAT, we conducted case-control association studies for three diseases, including 783 breast cancer cases, 272 ovarian cancer cases, 205 Crohn disease cases and 3507 shared controls (including 1722 females) using sequencing data from multiple sources. XPAT greatly reduced Type I error inflation in the case-control analyses, while replicating many previously identified disease-gene associations. We also show that association tests conducted with XPAT using cross-platform data have comparable performance to tests using matched platform data. XPAT enables new association studies that combine existing sequencing datasets to identify genetic loci associated with common diseases and other complex traits.

  14. The biomechanical demands of manual scaling on the shoulders & neck of dental hygienists.

    PubMed

    La Delfa, Nicholas J; Grondin, Diane E; Cox, Jocelyn; Potvin, Jim R; Howarth, Samuel J

    2017-01-01

    The purpose of this study was to evaluate the postural and muscular demands placed on the shoulders and neck of dental hygienists when performing a simulated manual scaling task. Nineteen healthy female dental hygienists performed 30-min of simulated manual scaling on a manikin head in a laboratory setting. Surface electromyography was used to monitor muscle activity from several neck and shoulder muscles, and neck and arm elevation kinematics were evaluated using motion capture. The simulated scaling task resulted in a large range of neck and arm elevation angles and excessive low-level muscular demands in the neck extensor and scapular stabilising muscles. The physical demands varied depending on the working position of the hygienists relative to the manikin head. These findings are valuable in guiding future ergonomics interventions aimed at reducing the physical exposures of dental hygiene work. Practitioner Summary: Given that this study evaluates the physical demands of manual scaling, a procedure that is fundamental to dental hygiene work, the findings are valuable to identify ergonomics interventions to reduce the prevalence of work-related injuries, disability and the potential for early retirement among this occupational group.

  15. Will COBE challenge the inflationary paradigm - Cosmic microwave background anisotropies versus large-scale streaming motions revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorski, K.M.

    1991-03-01

    The relation between cosmic microwave background (CMB) anisotropies and large-scale galaxy streaming motions is examined within the framework of inflationary cosmology. The minimal Sachs and Wolfe (1967) CMB anisotropies at large angular scales in the models with initial Harrison-Zel'dovich spectrum of inhomogeneity normalized to the local large-scale bulk flow, which are independent of the Hubble constant and specific nature of dark matter, are found to be within the anticipated ultimate sensitivity limits of COBE's Differential Microwave Radiometer experiment. For example, the most likely value of the quadrupole coefficient is predicted to be a2 not less than 7 × 10^-6, where equality applies to the limiting minimal model. If (1) COBE's DMR instruments perform well throughout the two-year period; (2) the anisotropy data are not marred by systematic errors; (3) the large-scale motions retain their present observational status; (4) there is no statistical conspiracy in the sense of the measured bulk flow being of untypically high amplitude and the large-scale anisotropy of untypically low amplitude; and (5) the low-order multipoles in the all-sky primordial fireball temperature map are not detected, the inflationary paradigm will have to be questioned. 19 refs.

  16. Structure of high and low shear-stress events in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Gomit, G.; de Kat, R.; Ganapathisubramani, B.

    2018-01-01

    Simultaneous particle image velocimetry (PIV) and wall-shear-stress sensor measurements were performed to study structures associated with shear-stress events in a flat plate turbulent boundary layer at a Reynolds number Re_τ ≈ 4000. The PIV field of view covers 8δ (where δ is the boundary layer thickness) along the streamwise direction and captures the entire boundary layer in the wall-normal direction. Simultaneously, wall-shear-stress measurements that capture the large-scale fluctuations were taken using a spanwise array of hot-film skin-friction sensors (spanning 2δ). Based on this combination of measurements, the organization of the conditional wall-normal and streamwise velocity fluctuations (u and v) and of the Reynolds shear stress (-uv) can be extracted. Conditional averages of the velocity field are computed by dividing the histogram of the large-scale wall-shear-stress fluctuations into four quartiles, each containing 25% of the occurrences. The conditional events corresponding to the extreme quartiles of the histogram (positive and negative) predominantly contribute to a change of velocity profile associated with the large structures and in the modulation of the small scales. A detailed examination of the Reynolds shear-stress contribution related to each of the four quartiles shows that the flow above a low wall-shear-stress event carries a larger amount of Reynolds shear stress than the other quartiles. The contribution of the small and large scales to this observation is discussed based on a scale decomposition of the velocity field.
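
    The quartile-conditioning described above reduces to a short numpy operation once the wall-shear signal and the simultaneously sampled field are aligned in time; the helper below is a generic sketch of that bookkeeping, not the authors' processing code.

        import numpy as np

        def conditional_average(wall_shear, field, n_bins=4):
            # Split the large-scale wall-shear-stress signal into quartiles and
            # average the co-acquired field (e.g. PIV velocity or Reynolds shear
            # stress snapshots, first axis = time) within each quartile.
            edges = np.quantile(wall_shear, np.linspace(0.0, 1.0, n_bins + 1))
            labels = np.clip(np.digitize(wall_shear, edges[1:-1]), 0, n_bins - 1)
            return [field[labels == q].mean(axis=0) for q in range(n_bins)]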

  17. Towards the computation of time-periodic inertial range dynamics

    NASA Astrophysics Data System (ADS)

    van Veen, L.; Vela-Martín, A.; Kawahara, G.

    2018-04-01

    We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.

  18. The Effect of a State Department of Education Teacher Mentor Initiative on Science Achievement

    NASA Astrophysics Data System (ADS)

    Pruitt, Stephen L.; Wallace, Carolyn S.

    2012-06-01

    This study investigated the effectiveness of a southern state's department of education program to improve science achievement through embedded professional development of science teachers in the lowest performing schools. The Science Mentor Program provided content and inquiry-based coaching by teacher leaders to science teachers in their own classrooms. The study analyzed the mean scale scores for the science portion of the state's high school graduation test for the years 2004 through 2007 to determine whether schools receiving the intervention scored significantly higher than comparison schools receiving no intervention. The results showed that all schools achieved significant improvement of scale scores between 2004 and 2007, but there were no significant performance differences between intervention and comparison schools, nor were there any significant differences between various subgroups in intervention and comparison schools. However, one subgroup, economically disadvantaged (ED) students, from high-level intervention schools closed the achievement gap with ED students from no-intervention schools across the period of the study. The study provides important information to guide future research on and design of large-scale professional development programs to foster inquiry-based science.

  19. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
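
    The following Python sketch illustrates the general multi-scale non-negative coding idea in a simplified form (it omits the Fisher discriminative analysis and the SVM stage of the paper): each scale layer of an image is coded with a non-negative factorization (scikit-learn's NMF as a stand-in for the paper's sparse coder) and the pooled codes are concatenated into one feature vector. All sizes and data are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def scale_layers(img, n_scales=3):
    """Crude multi-scale decomposition by repeated 2x2 average pooling."""
    layers, current = [], img
    for _ in range(n_scales):
        layers.append(current)
        h, w = current.shape[0] // 2 * 2, current.shape[1] // 2 * 2
        current = current[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return layers

def multiscale_nn_code(img, n_atoms=8, patch=8):
    """Concatenate pooled non-negative codes (via NMF) from each scale layer."""
    feats = []
    for layer in scale_layers(img):
        ph, pw = layer.shape[0] // patch, layer.shape[1] // patch
        if ph == 0 or pw == 0:
            continue
        # Non-overlapping patches become rows of a non-negative data matrix.
        patches = (layer[:ph * patch, :pw * patch]
                   .reshape(ph, patch, pw, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(ph * pw, patch * patch))
        k = min(n_atoms, len(patches))
        codes = NMF(n_components=k, init="nndsvda",
                    max_iter=500).fit_transform(patches)
        feats.append(codes.mean(axis=0))          # pool codes per scale layer
    return np.concatenate(feats)

img = np.abs(np.random.default_rng(1).standard_normal((128, 128)))
print(multiscale_nn_code(img).shape)              # one concatenated feature vector
```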

  20. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
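
    A minimal sketch of the screening idea, assuming scikit-learn and a synthetic genotype matrix in which two hypothetical SNPs interact with weak marginal effects; SNPs are ranked by random forest importance and only the top-ranked subset is retained for follow-up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_samples, n_snps = 500, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps))   # genotypes coded 0/1/2

# Hypothetical risk model: two interacting SNPs (columns 10 and 20) with weak
# marginal effects but a clear joint effect on disease probability.
risk = (X[:, 10] > 0) & (X[:, 20] > 0)
y = (rng.random(n_samples) < np.where(risk, 0.7, 0.3)).astype(int)

rf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                            random_state=0, n_jobs=-1).fit(X, y)

# Rank SNPs by importance and keep a small screened subset for further study.
top = np.argsort(rf.feature_importances_)[::-1][:20]
print("screened SNPs:", sorted(top.tolist()))
```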

  1. Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2005-01-01

    Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks, assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; such large-scale networks must instead be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
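
    The following toy sketch illustrates the kind of fully distributed consensus described above: each mote repeatedly averages its value with a few neighbors, using only local information, and the network converges to a common estimate close to the global mean. Topology, neighbor counts and readings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_motes = 100
readings = rng.normal(loc=20.0, scale=2.0, size=n_motes)   # local sensor values

# Ring-with-shortcuts neighbor lists: each mote only knows a few neighbors.
neighbors = {i: [(i - 1) % n_motes, (i + 1) % n_motes,
                 int(rng.integers(n_motes))] for i in range(n_motes)}

x = readings.copy()
for _ in range(200):                    # repeated purely local averaging rounds
    x_new = x.copy()
    for i in range(n_motes):
        local = [x[i]] + [x[j] for j in neighbors[i]]
        x_new[i] = np.mean(local)       # decision uses only local information
    x = x_new

print("true mean:", readings.mean(), " consensus value:", x[0])
```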

  2. Computational study of culture conditions and nutrient supply in a hollow membrane sheet bioreactor for large-scale bone tissue engineering.

    PubMed

    Khademi, Ramin; Mohebbi-Kalhori, Davod; Hadjizadeh, Afra

    2014-03-01

    Successful bone tissue culture in a large implant is still a challenge. We have previously developed a porous hollow membrane sheet (HMSh) for tissue engineering applications (Afra Hadjizadeh and Davod Mohebbi-Kalhori, J Biomed. Mater. Res. Part A [2]). This study aims to investigate culture conditions and nutrient supply in a bioreactor made of HMSh. For this purpose, the hydrodynamic and mass transport behavior in the newly proposed hollow membrane sheet bioreactor, including a lumen region and a porous membrane (scaffold) for supporting and feeding cells, with a grooved section for accommodating the gel-cell matrix, was numerically studied. A finite element method was used for solving the governing equations in both homogeneous and porous media. Furthermore, cell resistance and waste production have been included in a 3D mathematical model. The influences of different bioreactor design parameters, the scaffold properties that determine the HMSh bioreactor performance, and various operating conditions are discussed in detail. The obtained results illustrate that the novel scaffold can be employed in large-scale applications in bone tissue engineering.

  3. Efficient analysis of large-scale genome-wide data with two R packages: bigstatsr and bigsnpr.

    PubMed

    Privé, Florian; Aschard, Hugues; Ziyatdinov, Andrey; Blum, Michael G B

    2017-03-30

    Genome-wide datasets produced for association studies have dramatically increased in size over the past few years, with modern datasets commonly including millions of variants measured in tens of thousands of individuals. This increase in data size is a major challenge severely slowing down genomic analyses, leading to some software becoming obsolete and researchers having limited access to diverse analysis tools. Here we present two R packages, bigstatsr and bigsnpr, allowing the analysis of large-scale genomic data to be performed within R. To address large data size, the packages use memory-mapping for accessing data matrices stored on disk instead of in RAM. To perform data pre-processing and data analysis, the packages integrate most of the tools that are commonly used, either through transparent system calls to existing software, or through updated or improved implementations of existing methods. In particular, the packages implement fast and accurate computations of principal component analysis and association studies, functions to remove SNPs in linkage disequilibrium, and algorithms to learn polygenic risk scores on millions of SNPs. We illustrate applications of the two R packages by analyzing a case-control genomic dataset for celiac disease, performing an association study and computing polygenic risk scores. Finally, we demonstrate the scalability of the R packages by analyzing a simulated genome-wide dataset including 500,000 individuals and 1 million markers on a single desktop computer. https://privefl.github.io/bigstatsr/ & https://privefl.github.io/bigsnpr/. florian.prive@univ-grenoble-alpes.fr & michael.blum@univ-grenoble-alpes.fr. Supplementary materials are available at Bioinformatics online.
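
    bigstatsr and bigsnpr are R packages; as a language-neutral illustration of the memory-mapping idea they rely on, the hedged Python sketch below stores a genotype-like matrix on disk with numpy.memmap and computes per-SNP statistics block by block, so RAM usage stays bounded. The file name and matrix sizes are invented.

```python
import numpy as np

n_ind, n_snp = 2_000, 20_000                       # illustrative sizes
path = "genotypes.memmap"                          # hypothetical file name

# Create and fill a disk-backed matrix once (a stand-in for a real dataset).
rng = np.random.default_rng(0)
G = np.memmap(path, dtype=np.int8, mode="w+", shape=(n_ind, n_snp))
for start in range(0, n_snp, 5_000):               # fill in column blocks
    G[:, start:start + 5_000] = rng.integers(0, 3, size=(n_ind, 5_000))
G.flush()

# Re-open read-only and compute per-SNP allele frequencies block by block,
# so peak RAM holds one block rather than the whole matrix.
G = np.memmap(path, dtype=np.int8, mode="r", shape=(n_ind, n_snp))
freq = np.empty(n_snp)
for start in range(0, n_snp, 5_000):
    block = np.asarray(G[:, start:start + 5_000], dtype=np.float64)
    freq[start:start + 5_000] = block.mean(axis=0) / 2.0
print(freq[:5])
```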

  4. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  5. Designing a Large-Scale Multilevel Improvement Initiative: The Improving Performance in Practice Program

    ERIC Educational Resources Information Center

    Margolis, Peter A.; DeWalt, Darren A.; Simon, Janet E.; Horowitz, Sheldon; Scoville, Richard; Kahn, Norman; Perelman, Robert; Bagley, Bruce; Miles, Paul

    2010-01-01

    Improving Performance in Practice (IPIP) is a large system intervention designed to align efforts and motivate the creation of a tiered system of improvement at the national, state, practice, and patient levels, assisting primary-care physicians and their practice teams to assess and measurably improve the quality of care for chronic illness and…

  6. The nonlinear Galerkin method: A multi-scale method applied to the simulation of homogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Debussche, A.; Dubois, T.; Temam, R.

    1993-01-01

    Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates on the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, making it a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.

  7. Thin Watts-Strogatz networks.

    PubMed

    de Moura, Alessandro P S

    2006-01-01

    A modified version of the Watts-Strogatz (WS) network model is proposed, in which the number of shortcuts scales with the network size N as N^alpha, with alpha < 1. In these networks, the ratio of the number of shortcuts to the network size approaches zero as N --> infinity, whereas in the original WS model, this ratio is constant. We call such networks "thin Watts-Strogatz networks." We show that even though the fraction of shortcuts becomes vanishingly small for large networks, they still cause a kind of small-world effect, in the sense that the length L of the network increases sublinearly with the size. We develop a mean-field theory for these networks, which predicts that the length scales as N^(1-alpha) ln N for large N. We also study how a search using only local information works in thin WS networks. We find that the search performance is enhanced compared to the regular network, and we predict that the search time tau scales as N^(1-alpha/2). These theoretical results are tested using numerical simulations. We comment on the possible relevance of thin WS networks for the design of high-performance low-cost communication networks.
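
    A small numerical sketch of a thin WS network under the construction implied above (a ring lattice plus roughly N^alpha random shortcuts), comparing the measured average path length with the N^(1-alpha) ln N mean-field trend; the proportionality constant is ignored, so only the growth trend is meaningful.

```python
import math
import numpy as np
import networkx as nx

def thin_ws(n, alpha, seed=0):
    """Ring lattice plus roughly n**alpha random shortcuts (a 'thin' WS network)."""
    rng = np.random.default_rng(seed)
    g = nx.cycle_graph(n)                  # regular ring lattice
    n_shortcuts = int(round(n ** alpha))
    while n_shortcuts > 0:
        u, v = rng.integers(n, size=2)
        if u != v and not g.has_edge(int(u), int(v)):
            g.add_edge(int(u), int(v))
            n_shortcuts -= 1
    return g

for n in (200, 400, 800):
    L = nx.average_shortest_path_length(thin_ws(n, alpha=0.5))
    # Mean-field trend: L grows roughly like N**(1 - alpha) * ln(N), up to a constant.
    print(n, round(L, 1), round(n ** 0.5 * math.log(n), 1))
```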

  8. Single cell versus large population analysis: cell variability in elemental intracellular concentration and distribution.

    PubMed

    Malucelli, Emil; Procopio, Alessandra; Fratini, Michela; Gianoncelli, Alessandra; Notargiacomo, Andrea; Merolle, Lucia; Sargenti, Azzurra; Castiglioni, Sara; Cappadone, Concettina; Farruggia, Giovanna; Lombardo, Marco; Lagomarsino, Stefano; Maier, Jeanette A; Iotti, Stefano

    2018-01-01

    The quantification of elemental concentration in cells is usually performed by analytical assays on large populations, missing peculiar but important rare cells. The present article aims at comparing elemental quantification in single cells and cell populations in three different cell types, using a new approach for single-cell elemental analysis performed at sub-micrometer scale that combines X-ray fluorescence microscopy and atomic force microscopy. The attention is focused on the light element Mg, exploiting the opportunity to compare the single-cell quantification to the cell population analysis carried out by a highly Mg-selective fluorescent chemosensor. The results show that the single-cell analysis reveals the same Mg differences found in large populations of the different cell strains studied. However, in one of the cell strains, single-cell analysis reveals two cells with an exceptionally high intracellular Mg content compared with the other cells of the same strain. The single-cell analysis allows mapping Mg and other light elements in whole cells at sub-micrometer scale. A detailed intensity correlation analysis on the two cells with the highest Mg content reveals that Mg subcellular localization correlates with oxygen in a different fashion with respect to the other sister cells of the same strain. Graphical abstract: Single cells or large population analysis, this is the question!

  9. Examining related influential factors for dental calculus scaling utilization among people with disabilities in Taiwan, a nationwide population-based study.

    PubMed

    Lai, Hsien-Tang; Kung, Pei-Tseng; Su, Hsun-Pi; Tsai, Wen-Chen

    2014-09-01

    Limited studies with large samples have been conducted on the utilization of dental calculus scaling among people with physical or mental disabilities. This study aimed to investigate the utilization of dental calculus scaling among the national disabled population. This study analyzed the utilization of dental calculus scaling among disabled people, using nationwide data between 2006 and 2008. Descriptive analysis and logistic regression were performed to analyze related influential factors for dental calculus scaling utilization. The dental calculus scaling utilization rate among people with physical or mental disabilities was 16.39%, and the annual utilization frequency was 0.2 times. Utilization rate was higher among the female and non-aboriginal samples. Utilization rate decreased with increased age and disability severity, while utilization rate increased with income, education level, urbanization of residential area and number of chronic illnesses. Gender, age, ethnicity (aboriginal or non-aboriginal), education level, urbanization of residential area, income, catastrophic illnesses, chronic illnesses, disability type, and disability severity significantly influenced the dental calculus scaling utilization rate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared to related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
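
    As a rough, hedged illustration of the sampling idea behind AKCL (not the authors' algorithm), the sketch below builds a Nystrom-style approximate kernel feature space from a small landmark sample and then runs ordinary competitive learning (online winner-take-all updates) in that low-dimensional space, avoiding the full kernel matrix.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (-2, 0, 2)])

# Nystrom-style approximate feature map built from m sampled landmarks.
m = 40
landmarks = X[rng.choice(len(X), m, replace=False)]
K_mm = rbf(landmarks, landmarks)
eigval, eigvec = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))
mapper = eigvec / np.sqrt(eigval)           # whitening of the landmark kernel
Phi = rbf(X, landmarks) @ mapper            # approximate kernel features

# Competitive learning (online winner-take-all) in the approximate feature space.
k, lr = 3, 0.05
centers = Phi[rng.choice(len(Phi), k, replace=False)].copy()
for _ in range(20):
    for phi in Phi[rng.permutation(len(Phi))]:
        winner = np.argmin(((centers - phi) ** 2).sum(1))
        centers[winner] += lr * (phi - centers[winner])   # move the winner only

labels = np.argmin(((Phi[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
print(np.bincount(labels))
```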

  11. Reducing graphene device variability with yttrium sacrificial layers

    NASA Astrophysics Data System (ADS)

    Wang, Ning C.; Carrion, Enrique A.; Tung, Maryann C.; Pop, Eric

    2017-05-01

    Graphene technology has made great strides since the material was isolated more than a decade ago. However, despite improvements in growth quality and numerous "hero" devices, challenges of uniformity remain, restricting the large-scale development of graphene-based technologies. Here, we investigate and reduce the variability of graphene transistors by studying the effects of contact metals (with and without a Ti layer), resist, and yttrium (Y) sacrificial layers during the fabrication of hundreds of devices. We find that with optical photolithography, residual resist and process contamination are unavoidable, ultimately limiting the device performance and yield. However, using Y sacrificial layers to isolate the graphene from processing conditions improves the yield (from 73% to 97%), the average device performance (three-fold increase of mobility and 58% lower contact resistance), and the device-to-device variability (standard deviation of Dirac voltage reduced by 20%). In contrast to other sacrificial layer techniques, the removal of the Y sacrificial layer with dilute HCl does not harm surrounding materials, simplifying large-scale graphene fabrication.

  12. The analysis of MAI in large scale MIMO-CDMA system

    NASA Astrophysics Data System (ADS)

    Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona

    2016-12-01

    Recently, technological development has driven rapid growth in the use of data carried by cellular services, which also implies the need for higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained by this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission to take advantage of a larger range of quality indicators. In the simulations we increased the number of antennas and observed how the performance of the system changed; based on that, we were able to compare a conventional MIMO system and a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received image.

  13. Experience with specifications applicable to certification. [of photovoltaic modules for large-scale application

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.

  14. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    NASA Astrophysics Data System (ADS)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high-precision and large-scale coordinate measurement, one commonly used approach to determine the coordinates of a target point is to utilize the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of an OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  15. How the Website Usability Elements Impact Performance

    NASA Astrophysics Data System (ADS)

    Aljukhadar, Muhammad; Senecal, Sylvain

    This research builds on the results of a large scale study in which participants performed an informational task on one of 59 websites spanning various industries to examine how the website usability elements (graphical attractiveness, information, interactivity, trust, and ease of use) drive users’ attitudes and intentions toward the website and how these effects vary according to site experience and end product tangibility. Results show that while the effects of site interactivity and graphical attractiveness were more influential for services sites, the effects of site information and trust were stronger for tangibles sites. Alternatively, compared to returning site visitors, first-time visitors perceived the website as less easy to use, needed more time to accomplish the online task, and based positive attitudes and intentions more strongly on the site information and interactivity. The results of a second study performed in a proximate culture largely corroborate these findings.

  16. Fine-Scale Microclimatic Variation Can Shape the Responses of Organisms to Global Change in Both Natural and Urban Environments.

    PubMed

    Pincebourde, Sylvain; Murdock, Courtney C; Vickers, Mathew; Sears, Michael W

    2016-07-01

    When predicting the response of organisms to global change, models use measures of climate at a coarse resolution from general circulation models or from downscaled regional models. Organisms, however, do not experience climate at such large scales. The climate heterogeneity over a landscape and how much of that landscape an organism can sample will ultimately determine the microclimates experienced by organisms. The past few decades have seen an important increase in the number of studies reporting microclimatic patterns at small scales. This synthesis intends to unify studies reporting microclimatic heterogeneity (mostly temperature) at various spatial scales, to infer any emerging trends, and to discuss the causes and consequences of such heterogeneity for organismal performance and with respect to changing land use patterns and climate. First, we identify the environmental drivers of heterogeneity across the various spatial scales that are pertinent to ectotherms. The thermal heterogeneity at the local and micro-scales is mostly generated by the architecture or the geometrical features of the microhabitat. Then, the thermal heterogeneity experienced by individuals is modulated by behavior. Second, we survey the literature to quantify thermal heterogeneity from the micro-scale up to the scale of a landscape in natural habitats. Despite difficulties in compiling studies that differ much in their design and aims, we found that there is as much thermal heterogeneity across micro-, local and landscape scales, and that the temperature range is large in general (>9 °C on average, and up to 26 °C). Third, we examine the extent to which urban habitats can be used to infer the microclimatic patterns of the future. Urban areas generate globally drier and warmer microclimatic patterns, and recent evidence suggests that thermal traits of ectotherms are adapted to them. Fourth, we explore the interplay between microclimate heterogeneity and the behavioral thermoregulatory abilities of ectotherms in setting their overall performance. We used a random walk framework to show that thermal heterogeneity allows more precise behavioral thermoregulation and a narrower temperature distribution of the ectotherm compared to less heterogeneous microhabitats. Finally, we discuss the potential impacts of global change on the fine-scale mosaics of microclimates. The amplitude of change may differ between spatial scales. In heterogeneous microhabitats, the amplitude of change at the micro-scale, caused by atmospheric warming, can be substantial while it can be limited at the local and landscape scales. We suggest that the warming signal will influence species performance and biotic interactions by modulating the mosaic of microclimates. © The Author 2016. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
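
    A toy version of the random-walk argument mentioned above: a simulated ectotherm walks across a temperature mosaic and stops moving when it finds a cell near its preferred temperature. With a more heterogeneous mosaic, the resulting body-temperature distribution sits closer to the preferred value; the grid, temperatures and movement rule are invented for illustration.

```python
import numpy as np

def body_temps(thermal_map, t_pref=30.0, tol=1.0, steps=20000, seed=0):
    """Random walk with a simple stay-if-comfortable thermoregulation rule."""
    rng = np.random.default_rng(seed)
    n = thermal_map.shape[0]
    pos = np.array([n // 2, n // 2])
    temps = []
    for _ in range(steps):
        here = thermal_map[pos[0], pos[1]]
        temps.append(here)
        if abs(here - t_pref) > tol:              # too hot or too cold: keep walking
            pos = (pos + rng.integers(-1, 2, size=2)) % n
    return np.array(temps)

rng = np.random.default_rng(1)
n = 50
homogeneous = 33.0 + rng.normal(0, 0.5, (n, n))   # small spread around 33 C
heterogeneous = 33.0 + rng.normal(0, 4.0, (n, n)) # large spread, some cells near 30 C

for name, tmap in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    bt = body_temps(tmap)
    print(f"{name:13s} mean |Tb - Tpref| = {np.abs(bt - 30.0).mean():.2f} C, "
          f"sd = {bt.std():.2f} C")
```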

  17. How Many Are Enough? A Quantitative Analysis of the Effects of the Number of Response Options on the Academic Performance of Students with Disabilities on Large-Scale Assessments

    ERIC Educational Resources Information Center

    Freeman, Sarah Reives

    2013-01-01

    The main focus of this study is to determine the effect of test design on the academic performance of students with disabilities participating in the NCEXTEND2 modified assessment program during the 2010-2011 school year. Participation of all students in state and federal accountability measure is required by No Child Left Behind (2001) and the…

  18. Genome wide analysis of flowering time trait in multiple environments via high-throughput genotyping technique in Brassica napus L.

    PubMed

    Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng

    2015-01-01

    The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers and the detection of underlying genetic factors are important not only for oilseed producers around the world but also for the other crops grown in rotation with B. napus in China. In previous studies, the low density and mixture of biomarkers used hindered both genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a double-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments in three geographic regions via eight existing genomic predictive models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed a large-scale mapping of FT-related loci among the three regions and found 437 associated SNPs, some of which represented known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed that significant interactions exist between detected loci, even among some known FT-related genes. All the results show that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.

  19. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    NASA Astrophysics Data System (ADS)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping systems mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
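
    A schematic sketch of the two clustering variants compared above, using k-means on synthetic NDVI time series as a stand-in for the paper's object-based clustering: one global (hyperclustering) run versus separate runs per landscape unit. Object counts, dates and the number of clusters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins: 23 NDVI composites per year for field objects, each object
# tagged with a landscape unit from a prior stratification (values are invented).
n_objects, n_dates = 600, 23
ndvi = rng.random((n_objects, n_dates))
landscape_unit = rng.integers(0, 3, size=n_objects)

# Hyperclustering variant: one clustering over the whole study area.
global_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(ndvi)

# Landscape-clustering variant: cluster each landscape unit separately, then
# offset the labels so they remain unique across units.
strat_labels = np.empty(n_objects, dtype=int)
offset = 0
for unit in np.unique(landscape_unit):
    mask = landscape_unit == unit
    k = 4
    strat_labels[mask] = offset + KMeans(n_clusters=k, n_init=10,
                                         random_state=0).fit_predict(ndvi[mask])
    offset += k

print(len(np.unique(global_labels)), "global clusters,",
      len(np.unique(strat_labels)), "stratified clusters")
```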

  20. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with the sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large-scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  1. Varying Herbivore Population Structure Correlates with Lack of Local Adaptation in a Geographic Variable Plant-Herbivore Interaction

    PubMed Central

    Cogni, Rodrigo; Trigo, José R.; Futuyma, Douglas J.

    2011-01-01

    Local adaptation of parasites to their hosts due to coevolution is a central prediction of many theories in evolutionary biology. However, empirical studies looking for parasite local adaptation show great variation in outcomes, and the reasons for such variation are largely unknown. In a previous study, we showed adaptive differentiation in the arctiid moth Utetheisa ornatrix to its host plant, the pyrrolizidine alkaloid-bearing legume Crotalaria pallida, at the continental scale, but found no differentiation at the regional scale. In the present study, we sampled the same sites to investigate factors that may contribute to the lack of differentiation at the regional scale. We performed field observations that show that specialist and non-specialist polyphagous herbivore incidence varies among populations at both scales. With a series of common-garden experiments we show that some plant traits that may affect herbivory (pyrrolizidine alkaloids and extrafloral nectaries) vary at the regional scale, while other traits (trichomes and nitrogen content) just vary at the continental scale. These results, combined with our previous evidence for plant population differentiation based on larval performance on fresh fruits, suggest that U. ornatrix is subjected to divergent selection even at the regional scale. Finally, with a microsatellite study we investigated population structure of U. ornatrix. We found that population structure is not stable over time: we found population differentiation at the regional scale in the first year of sampling, but not in the second year. Unstable population structure of the herbivore is the most likely cause of the lack of regional adaptation. PMID:22220208

  2. Centrifuge impact cratering experiments: Scaling laws for non-porous targets

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.

    1987-01-01

    This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.

  3. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
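
    A schematic, hedged Python toy of the rollback mechanism described above: each event has a forward handler and a matching reverse handler, and the one quantity that cannot be recomputed exactly (a random draw) is saved incrementally so the event can be undone. This is not the μsik or Blue Gene implementation.

```python
import random

class Cell:
    """One spatial cell in a toy reaction-diffusion epidemic model."""
    def __init__(self, susceptible, infected):
        self.s = susceptible
        self.i = infected

def infect_forward(cell, rng, saved):
    """Forward event: move a random number of S to I; save what cannot be reversed exactly."""
    new_cases = min(cell.s, rng.randrange(0, cell.s + 1))
    saved.append(new_cases)        # incremental state saving of the random draw
    cell.s -= new_cases            # these updates are undone by reverse computation
    cell.i += new_cases
    return new_cases

def infect_reverse(cell, saved):
    """Reverse event: undo the last infection event using the saved increment."""
    new_cases = saved.pop()
    cell.s += new_cases
    cell.i -= new_cases

rng = random.Random(7)
cell, saved = Cell(susceptible=100, infected=1), []
infect_forward(cell, rng, saved)
print("after forward:", cell.s, cell.i)
infect_reverse(cell, saved)
print("after rollback:", cell.s, cell.i)   # back to 100, 1
```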

  4. A global classification of coastal flood hazard climates associated with large-scale oceanographic forcing.

    PubMed

    Rueda, Ana; Vitousek, Sean; Camus, Paula; Tomás, Antonio; Espejo, Antonio; Losada, Inigo J; Barnard, Patrick L; Erikson, Li H; Ruggiero, Peter; Reguero, Borja G; Mendez, Fernando J

    2017-07-11

    Coastal communities throughout the world are exposed to numerous and increasing threats, such as coastal flooding and erosion, saltwater intrusion and wetland degradation. Here, we present the first global-scale analysis of the main drivers of coastal flooding due to large-scale oceanographic factors. Given the large dimensionality of the problem (e.g. spatiotemporal variability in flood magnitude and the relative influence of waves, tides and surge levels), we have performed a computer-based classification to identify geographical areas with homogeneous climates. Results show that 75% of coastal regions around the globe have the potential for very large flooding events with low probabilities (unbounded tails), 82% are tide-dominated, and almost 49% are highly susceptible to increases in flooding frequency due to sea-level rise.

  5. An outdoor test facility for the large-scale production of microalgae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, D.A.; Weissman, J.; Goebel, R.

    The goal of the US Department of Energy/Solar Energy Research Institute's Aquatic Species Program is to develop the technology base to produce liquid fuels from microalgae. This technology is being initially developed for the desert Southwest. As part of this program an outdoor test facility has been designed and constructed in Roswell, New Mexico. The site has a large existing infrastructure, a suitable climate, and abundant saline groundwater. This facility will be used to evaluate productivity of microalgae strains and conduct large-scale experiments to increase biomass productivity while decreasing production costs. Six 3-m^2 fiberglass raceways were constructed. Several microalgae strains were screened for growth, one of which had a short-term productivity rate of greater than 50 g dry wt m^-2 d^-1. Two large-scale, 0.1-ha raceways have also been built. These are being used to evaluate the performance trade-offs between low-cost earthen liners and higher cost plastic liners. A series of hydraulic measurements is also being carried out to evaluate future improved pond designs. Future plans include a 0.5-ha pond, which will be built in approximately 2 years to test a scaled-up system. This unique facility will be available to other researchers and industry for studies on microalgae productivity. 6 refs., 9 figs., 1 tab.

  6. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in standard as well as in dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  7. Understanding Atmospheric Anomalies Associated With Seasonal Pluvial-Drought Processes Using Southwest China as an Example

    NASA Astrophysics Data System (ADS)

    Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian

    2017-11-01

    Seasonal pluvial-drought transition processes are unique natural phenomena. To explore possible mechanisms, we considered Southwest China (SWC) as the study region and comprehensively investigated the temporal evolution and spatial patterns of large-scale and regional atmospheric variables with the simple method of Standardized Anomalies (SA). Some key procedures and results include the following: (1) Because regional atmospheric variables are more directly responsible for the transition processes, we investigate them in detail. The temporal evolution of net vertical integral water vapor flux (net VIWVF) across SWC, together with vertical SA-based patterns of regional horizontal divergence (D) and vertical motion (ω), coincides well with pluvial-drought transition processes. (2) With respect to large-scale circulation patterns, a well-organized Eurasian (EU) Pattern is one important feature during the pluvial-drought transitions over SWC. (3) Based on these large-scale and regional atmospheric anomalous features, relevant SA-based indices were built to explore the possibility of simulating drought development using previous pluvial anomalies. As a whole, simulated drought development with SA-based indices of large-scale circulation patterns alone does not perform well. It improves considerably when SA-based indices of regional D and net VIWVF are introduced. (4) In addition, potential drought prediction using pluvial anomalies, together with a deeper understanding of the physical mechanisms responsible for pluvial-drought transitions, needs to be further explored.

  8. Recovery of Large Angular Scale CMB Polarization for Instruments Employing Variable-Delay Polarization Modulators

    NASA Technical Reports Server (NTRS)

    Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.; et al.

    2016-01-01

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.

  9. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.

  10. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...

    2015-05-22

    The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance of our multithreaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.

  11. Large-scale investigation of the parameters in response to Eimeria maxima challenge in broilers.

    PubMed

    Hamzic, E; Bed'Hom, B; Juin, H; Hawken, R; Abrahamsen, M S; Elsen, J M; Servin, B; Pinard-van der Laan, M H; Demeure, O

    2015-04-01

    Coccidiosis, a parasitic disease of the intestinal tract caused by members of the genera Eimeria and Isospora, is one of the most common and costly diseases in chicken. The aims of this study were to assess the effect of the challenge and level of variability of measured parameters in chickens during the challenge with Eimeria maxima. Furthermore, this study aimed to investigate which parameters are the most relevant indicators of the health status. Finally, the study also aimed to estimate accuracy of prediction for traits that cannot be measured on large scale (such as intestinal lesion score and fecal oocyst count) using parameters that can easily be measured on all animals. The study was performed in 2 parts: a pilot challenge on 240 animals followed by a large-scale challenge on 2,024 animals. In both experiments, animals were challenged with 50,000 Eimeria maxima oocysts at 16 d of age. In the pilot challenge, all animals were measured for BW gain, plasma coloration, hematocrit, and rectal temperature and, in addition, a subset of 48 animals was measured for oocyst count and the intestinal lesion score. All animals from the second challenge were measured for BW gain, plasma coloration, and hematocrit whereas a subset of 184 animals was measured for intestinal lesion score, fecal oocyst count, blood parameters, and plasma protein content and composition. Most of the parameters measured were significantly affected by the challenge. Lesion scores for duodenum and jejunum (P < 0.001), oocyst count (P < 0.05), plasma coloration for the optical density values between 450 and 490 nm (P < 0.001), albumin (P < 0.001), α1-globulin (P < 0.01), α2-globulin (P < 0.001), α3-globulin (P < 0.01), and β2-globulin (P < 0.001) were the most strongly affected parameters and expressed the greatest levels of variation. Plasma protein profiles proved to be a new, reliable parameter for measuring response to Eimeria maxima. Prediction of intestinal lesion score and fecal oocyst count using the other parameters measured was not very precise (R2 < 0.7). The study was successfully performed in real raising conditions on a large scale. Finally, we observed a high variability in response to the challenge, suggesting that broilers' response to Eimeria maxima has a strong genetic determinism, which may be improved by genetic selection.

  12. Combining Multilevel Analysis with National Value-Added Data Sets - A Case Study to Explore the Effects of School Diversity

    ERIC Educational Resources Information Center

    Schagen, Ian; Schagen, Sandie

    2005-01-01

    The advent of large-scale matched data sets, linking pupils' attainment across key stages, gives new opportunities to explore the effects of school organisational factors on pupil performance. Combined with currently available sophisticated and efficient software for multilevel analysis, it offers educational researchers the chance to develop…

  13. The Southern and Eastern Africa Consortium for Monitoring Educational Quality. Assessment GEMs No. 8

    ERIC Educational Resources Information Center

    Australian Council for Educational Research, 2015

    2015-01-01

    The Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ) carries out large-scale cross-national research studies in member countries in the Southern and Eastern Africa region. It aims to assess the conditions of schooling and performance levels of learners and teachers in the areas of literacy and numeracy. SACMEQ has…

  14. Gender Differences in Processing Speed: A Review of Recent Research

    ERIC Educational Resources Information Center

    Roivainen, Eka

    2011-01-01

    A review of recent large-scale studies on gender differences in processing speed and on the cognitive factors assumed to affect processing speed was performed. It was found that females have an advantage in processing speed tasks involving digits and alphabets as well as in rapid naming tasks while males are faster on reaction time tests and…

  15. Lexical Difficulty--Using Elicited Imitation to Study Child L2

    ERIC Educational Resources Information Center

    Campfield, Dorota E.

    2017-01-01

    This paper reports a post-hoc analysis of the influence of lexical difficulty of cue sentences on performance in an elicited imitation (EI) task to assess oral production skills for 645 child L2 English learners in instructional settings. This formed part of a large-scale investigation into effectiveness of foreign language teaching in Polish…

  16. THE EFFECT OF CHLORIDE AND ORTHOPHOSPHATE ON THE RELEASE OF IRON FROM DRINKING WATER DISTRIBUTION SYSTEM CAST IRON MAIN

    EPA Science Inventory

    “Colored water” resulting from suspended iron particles is a common drinking water consumer complaint which is largely impacted by water chemistry. A bench scale study, performed on a 90 year-old corroded cast-iron pipe section removed from a drinking water distribution system, w...

  17. Prediction of monthly rainfall on homogeneous monsoon regions of India based on large scale circulation patterns using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Kashid, Satishkumar S.; Maity, Rajib

    2012-08-01

    Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climatic system. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and from the tropical Indian Ocean (EQUINOO) are established influences on the Indian Summer Monsoon Rainfall. The information in these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been made for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India' defined by the Indian Institute of Tropical Meteorology. The artificial intelligence tool Genetic Programming (GP) has been employed for modeling this problem. The Genetic Programming approach is found to capture the complex relationship between monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices ENSO and EQUINOO. Research findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall with a correlation coefficient as high as 0.866, which appears attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall, for India as one unit and for the five homogeneous monsoon regions, based on ENSO and EQUINOO indices of March, April and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower correlation coefficient (C.C.) values for the different 'homogeneous monsoon regions'.
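
    A minimal sketch of how such a GP-based regression between climate indices and rainfall could be set up, assuming the third-party gplearn library and synthetic ENSO/EQUINOO index values; the real study uses its own GP formulation and observed data, so everything here is illustrative.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n_years = 60
X = rng.standard_normal((n_years, 2))        # columns: ENSO index, EQUINOO index
rain = 850 + 40 * X[:, 0] - 25 * X[:, 0] * X[:, 1] + rng.normal(0, 15, n_years)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X[:45], rain[:45])                    # train on the first 45 "years"

pred = gp.predict(X[45:])
cc = np.corrcoef(pred, rain[45:])[0, 1]      # correlation-coefficient skill score
print("evolved model:", gp._program)
print("hold-out correlation coefficient:", round(cc, 2))
```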

  18. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
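
    The co-scheduling idea can be illustrated with a small, hypothetical sketch (not the paper's implementation): each feature-computation task is greedily assigned to whichever CPU worker or GPU is estimated to finish it earliest, so that neither resource sits idle. Task names, run-time estimates and worker counts are invented placeholders.

      # Sketch of performance-aware CPU-GPU co-scheduling with per-device time estimates.
      from dataclasses import dataclass

      @dataclass
      class Task:
          name: str
          cpu_time: float   # estimated seconds on one CPU worker
          gpu_time: float   # estimated seconds on the GPU

      def co_schedule(tasks, n_cpu_workers=4, n_gpus=1):
          """Assign each task to the device that would finish it earliest."""
          cpu_free = [0.0] * n_cpu_workers      # next-free time per CPU worker
          gpu_free = [0.0] * n_gpus             # next-free time per GPU
          plan = []
          # Scheduling the most expensive tasks first tends to balance load better.
          for t in sorted(tasks, key=lambda t: max(t.cpu_time, t.gpu_time), reverse=True):
              c = min(range(n_cpu_workers), key=lambda i: cpu_free[i])
              g = min(range(n_gpus), key=lambda i: gpu_free[i])
              if cpu_free[c] + t.cpu_time <= gpu_free[g] + t.gpu_time:
                  cpu_free[c] += t.cpu_time
                  plan.append((t.name, f"cpu{c}"))
              else:
                  gpu_free[g] += t.gpu_time
                  plan.append((t.name, f"gpu{g}"))
          makespan = max(cpu_free + gpu_free)   # estimated finish time of the whole batch
          return plan, makespan

      # Toy feature-computation tasks whose GPU speedups differ per operation.
      tasks = [Task("color-deconvolution", 4.0, 0.6), Task("morphometry", 2.5, 1.8),
               Task("texture", 6.0, 0.9), Task("nuclei-segmentation", 8.0, 1.2)]
      plan, makespan = co_schedule(tasks)
      print(plan, f"estimated makespan: {makespan:.1f}s")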

  19. A practical overview and comparison of certain commercial forensic software tools for processing large-scale digital investigations

    NASA Astrophysics Data System (ADS)

    Kröger, Knut; Creutzburg, Reiner

    2013-05-01

    The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance EnCase Forensic 7 with respect to performance, functionality, usability and capability. We show how these software tools work with large forensic images and how capable they are of examining complex and big-data scenarios.

  20. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms be efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research into developing flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform used to evaluate new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas. A sketch of the entropy-weighting idea appears after this record.
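
    A minimal sketch of the entropy idea behind Information-Aware Partitioning, assuming sampled value traces are available for each candidate inter-chip signal: edges that cross a cut are weighted by the empirical entropy of the data they carry, so a partitioner can prefer cuts whose inter-chip traffic compresses well. All names and the toy traces are illustrative, not taken from the dissertation.

      # Sketch: entropy-weighted cost of the signals crossing a candidate partition cut.
      from collections import Counter
      from math import log2

      def empirical_entropy(samples):
          """Shannon entropy (bits per symbol) of an observed value stream."""
          counts = Counter(samples)
          n = len(samples)
          return -sum((c / n) * log2(c / n) for c in counts.values())

      def cut_cost(partition_a, edges, traces):
          """Sum of entropy weights over edges that cross the cut.

          partition_a: set of node names on one side of the cut
          edges:       iterable of (src_node, dst_node, signal_name)
          traces:      dict signal_name -> list of sampled values carried by that signal
          """
          cost = 0.0
          for src, dst, sig in edges:
              if (src in partition_a) != (dst in partition_a):   # edge crosses the cut
                  # A compressible (low-entropy) signal costs fewer bits per transfer
                  # than its raw width suggests, so it is a cheaper signal to cut.
                  cost += empirical_entropy(traces[sig])
          return cost

      # Toy example: a nearly constant trigger bit vs. a noisy raw ADC stream.
      traces = {"trig_bits": [0, 0, 0, 1, 0, 0, 0, 0], "adc_raw": [3, 7, 1, 9, 4, 2, 8, 6]}
      edges = [("calo", "sorter", "trig_bits"), ("calo", "sorter", "adc_raw")]
      print(f"entropy-weighted cut cost: {cut_cost({'calo'}, edges, traces):.2f} bits/transfer")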
