Sample records for extreme scale computing

  1. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  2. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  3. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  4. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  5. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
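
    The two-tier idea described above can be illustrated with a minimal sketch, assuming a per-rank table in which the first tier only records whether a source neuron has any local targets (so spikes from irrelevant sources are filtered cheaply) and the second tier holds the actual local synapse lists. The class and method names below are hypothetical and are not the NEST data structures.

```python
# Hedged sketch of a two-tier connection table for sparse spike delivery.
# These structures are illustrative only; they are not the actual NEST code.

class TwoTierConnections:
    def __init__(self):
        self.has_local_targets = set()   # tier 1: source ids with >= 1 local synapse
        self.local_synapses = {}         # tier 2: source id -> list of (target, weight)

    def connect(self, source, target, weight):
        self.has_local_targets.add(source)
        self.local_synapses.setdefault(source, []).append((target, weight))

    def deliver(self, spiking_sources):
        """Return (target, weight) pairs for spikes that actually matter locally."""
        events = []
        for s in spiking_sources:
            if s not in self.has_local_targets:    # cheap tier-1 filter
                continue
            events.extend(self.local_synapses[s])  # tier-2 lookup only when needed
        return events

# Usage: most remote sources have no local targets, so tier 1 filters them out.
conn = TwoTierConnections()
conn.connect(source=7, target=3, weight=0.5)
print(conn.deliver(spiking_sources=[7, 11, 42]))  # only source 7 produces events
```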

  6. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  7. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Frederick National Laboratory for Cancer Research

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  8. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  9. Gravitational Waves From the Kerr/CFT Correspondence

    NASA Astrophysics Data System (ADS)

    Porfyriadis, Achilleas

    Astronomical observation suggests the existence of near-extreme Kerr black holes in the sky. Properties of diffeomorphisms imply that dynamics of the near-horizon region of near-extreme Kerr are governed by an infinite-dimensional conformal symmetry. This symmetry may be exploited to analytically, rather than numerically, compute a variety of potentially observable processes. In this thesis we compute the gravitational radiation emitted by a small compact object that orbits in the near-horizon region and plunges into the horizon of a large rapidly rotating black hole. We study the holographically dual processes in the context of the Kerr/CFT correspondence and find our conformal field theory (CFT) computations in perfect agreement with the gravity results. We compute the radiation emitted by a particle on the innermost stable circular orbit (ISCO) of a rapidly spinning black hole. We confirm previous estimates of the overall scaling of the power radiated, but show that there are also small oscillations all the way to extremality. Furthermore, we reveal an intricate mode-by-mode structure in the flux to infinity, with only certain modes having the dominant scaling. The scaling of each mode is controlled by its conformal weight. Massive objects in adiabatic quasi-circular inspiral towards a near-extreme Kerr black hole quickly plunge into the horizon after passing the ISCO. The post-ISCO plunge trajectory is shown to be related by a conformal map to a circular orbit. Conformal symmetry of the near-horizon region is then used to compute analytically the gravitational radiation produced during the plunge phase. Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. We use conformal transformations to analytically solve for the radiation emitted from various fast plunges into extreme and near-extreme Kerr black holes.

  10. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  11. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  12. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  13. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  14. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  15. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  16. The science of visual analysis at extreme scale

    NASA Astrophysics Data System (ADS)

    Nowell, Lucy T.

    2011-01-01

    Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high-performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially, and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.

  17. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  18. Upper extremity pain and computer use among engineering graduate students.

    PubMed

    Schlossberg, Eric B; Morrow, Sandra; Llosa, Augusto E; Mamary, Edward; Dietrich, Peter; Rempel, David M

    2004-09-01

    The objective of this study was to investigate risk factors associated with persistent or recurrent upper extremity and neck pain among engineering graduate students. A random sample of 206 Electrical Engineering and Computer Science (EECS) graduate students at a large public university completed an online questionnaire. Approximately 60% of respondents reported upper extremity or neck pain attributed to computer use and reported a mean pain severity score of 4.5 (+/-2.2; scale 0-10). In a final logistic regression model, female gender, years of computer use, and hours of computer use per week were significantly associated with pain. The high prevalence of upper extremity pain reported by graduate students suggests a public health need to identify interventions that will reduce symptom severity and prevent impairment.
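
    A minimal sketch of the kind of logistic regression reported above, fit on synthetic data with statsmodels; the variable names, sample generation, and coefficients are illustrative assumptions, not the study's data or results.

```python
# Hedged sketch of a logistic regression of persistent pain on gender, years of
# computer use, and weekly hours of use, fit on synthetic data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 206
female = rng.integers(0, 2, n)
years_of_use = rng.uniform(1, 15, n)
hours_per_week = rng.uniform(5, 60, n)

# Synthetic outcome: probability of persistent pain rises with each predictor.
logit = -3.0 + 0.8 * female + 0.08 * years_of_use + 0.03 * hours_per_week
pain = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, years_of_use, hours_per_week]))
model = sm.Logit(pain, X).fit(disp=False)
print(model.summary(xname=["const", "female", "years_of_use", "hours_per_week"]))
```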

  19. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  20. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  1. The future of scientific workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Peterka, Tom; Altintas, Ilkay

    Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science and the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, and workflow needs, and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

  2. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  3. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    This report introduces publications that present the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focused on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.

  4. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  5. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging, as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.

  6. Extreme-Scale De Novo Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different components of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and the communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
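
    One of the key distributed data structures in assemblers of this kind is a distributed k-mer hash table. The sketch below is a hedged, single-process illustration in which each k-mer is routed to an "owner" rank chosen by its hash; the function names and routing scheme are assumptions for illustration, not the HipMer implementation.

```python
# Hedged sketch of a distributed k-mer hash table: each k-mer is "owned" by the
# rank determined by its hash, so counting requires one all-to-all exchange.
# Names are illustrative; this is not the HipMer code.
from collections import Counter
import zlib

def kmers(read, k=5):
    return (read[i:i + k] for i in range(len(read) - k + 1))

def owner(kmer, nranks):
    return zlib.crc32(kmer.encode()) % nranks   # deterministic owner rank

def count_distributed(reads_per_rank, nranks, k=5):
    # Phase 1: every rank buckets its local k-mers by owner (the "all-to-all").
    outboxes = [[] for _ in range(nranks)]
    for reads in reads_per_rank:
        for read in reads:
            for km in kmers(read, k):
                outboxes[owner(km, nranks)].append(km)
    # Phase 2: each owner counts only the k-mers routed to it.
    return [Counter(box) for box in outboxes]

reads = [["ACGTACGTAC"], ["CGTACGTACG"]]          # two ranks' worth of reads
print(count_distributed(reads, nranks=2))
```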

  7. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    PubMed

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture (p < 0.001) and workstation layout (p = 0.002) improved over 6 months; furthermore, intensity (p < 0.001), duration (p < 0.001), and frequency (p = 0.009) of WUEMSS decreased significantly in the intervention group compared with the control group. Additionally, the functional status (p = 0.001), and physical (p < 0.001), and mental (p = 0.035) health-related quality of life improved significantly compared with the controls. There was no improvement of work day loss due to WUEMSS (p > 0.05). Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  8. xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina

    Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.

  9. xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit

    DOE PAGES

    Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina; ...

    2017-03-01

    Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.

  10. NERSC News

    Science.gov Websites

    Deep Learning at 15 PFlops Enables Training for Extreme Weather Identification at Scale (March 29, 2018).

  11. Cosmological neutrino simulations at extreme scale

    DOE PAGES

    Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...

    2017-08-01

    Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
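
    The abstract does not state how the 24-byte to roughly 9-byte reduction is achieved; the sketch below is one plausible illustration of such a budget, storing positions as 2-byte fixed-point offsets within a local tile and velocities as 1-byte quantized fractions of a cap. The layout, tile scheme, and parameter names are assumptions and may differ from the actual CUBEP3M/TianNu method.

```python
# Hedged illustration of how a phase-space particle footprint could shrink from
# 24 bytes (3 float32 positions + 3 float32 velocities) to ~9 bytes:
# 2-byte fixed-point offsets within a local tile plus 1-byte quantized velocities.
# The actual CUBEP3M/TianNu scheme may differ; this only illustrates the budget.
import numpy as np
import struct

def compress(pos, vel, tile_origin, tile_size, vmax):
    # positions -> uint16 offsets inside the tile (2 bytes per component)
    q_pos = np.round((pos - tile_origin) / tile_size * 65535).astype(np.uint16)
    # velocities -> int8 fractions of a velocity cap (1 byte per component)
    q_vel = np.clip(np.round(vel / vmax * 127), -127, 127).astype(np.int8)
    return struct.pack("<3H3b",
                       *(int(v) for v in q_pos),
                       *(int(v) for v in q_vel))   # 3*2 + 3*1 = 9 bytes

blob = compress(pos=np.array([1.2, 3.4, 5.6]),
                vel=np.array([-80.0, 15.0, 220.0]),
                tile_origin=np.zeros(3), tile_size=10.0, vmax=300.0)
print(len(blob), "bytes per particle")   # -> 9
```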

  12. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  13. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”

  14. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M; Ahrens, James P; Lo, Li - Ta

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization, and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.

  15. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
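
    A minimal sketch of gossip-style failure detection, in which each alive process repeatedly pushes its set of suspected-failed ranks to one random peer until all views agree; the number of cycles grows roughly logarithmically with system size. This is an illustrative toy, not any of the paper's three algorithms or the Extreme-scale Simulator.

```python
# Hedged sketch of gossip-style failure detection: each alive process repeatedly
# shares its set of suspected-failed ranks with one random peer, and the number
# of cycles to reach agreement grows roughly logarithmically with system size.
import random

def gossip_consensus(nprocs, failed, seed=0):
    rng = random.Random(seed)
    alive = [r for r in range(nprocs) if r not in failed]
    # Initially only one rank has locally detected each failure.
    view = {r: set() for r in alive}
    for f in failed:
        view[rng.choice(alive)].add(f)

    cycles = 0
    while any(view[r] != set(failed) for r in alive):
        cycles += 1
        for r in alive:                       # one gossip cycle: everyone pushes once
            peer = rng.choice(alive)
            view[peer] |= view[r]
    return cycles

print(gossip_consensus(nprocs=1024, failed={3, 500}))   # typically ~log2(1024) cycles
```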

  16. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, was established at http://lab.malab.cn/soft/halign.

  17. Use of computer games as an intervention for stroke.

    PubMed

    Proffitt, Rachel M; Alankus, Gazihan; Kelleher, Caitlin L; Engsberg, Jack R

    2011-01-01

    Current rehabilitation for persons with hemiparesis after stroke requires high numbers of repetitions to be in accordance with contemporary motor learning principles. The motivational characteristics of computer games can be harnessed to create engaging interventions for persons with hemiparesis after stroke that incorporate this high number of repetitions. The purpose of this case report was to test the feasibility of using computer games as a 6-week home therapy intervention to improve upper extremity function for a person with stroke. One person with left upper extremity hemiparesis after stroke participated in a 6-week home therapy computer game intervention. The games were customized to her preferences and abilities and modified weekly. Her performance was tracked and analyzed. Data from pre-, mid-, and postintervention testing using standard upper extremity measures and the Reaching Performance Scale (RPS) were analyzed. After 3 weeks, the participant demonstrated increased upper extremity range of motion at the shoulder and decreased compensatory trunk movements during reaching tasks. After 6 weeks, she showed functional gains in activities of daily living (ADLs) and instrumental ADLs despite no further improvements on the RPS. Results indicate that computer games have the potential to be a useful intervention for people with stroke. Future work will add additional support to quantify the effectiveness of the games as a home therapy intervention for persons with stroke.

  18. Army Maneuver Center of Excellence

    DTIC Science & Technology

    2012-10-18

    Briefing slides describing the Army Maneuver Center of Excellence and the strategic partnerships that benefit the Army materiel enterprise, with agreements throughout DoD and other agencies (DARPA, JIEDDO, DHS, FAA, DoE, NSA, NASA, SMDC, etc.). Emerging science areas include neuroscience, network sciences, hierarchical computing, extreme energy science, autonomous systems technology, and meso-scale science, with improvements in Soldier-system overall performance sought through operational neuroscience and advanced simulation and training technologies.

  19. Epidemic failure detection and consensus for extreme parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.

  20. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
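
    A minimal sketch of a discrete-event core of the style described above: events are (virtual time, action) pairs processed in time order, here used to model an MPI-like send with an assumed latency/bandwidth. The class, the latency model, and the parameter values are illustrative assumptions, not the SST macroscale components.

```python
# Hedged sketch of a minimal discrete-event core: events are (time, action)
# pairs processed in virtual-time order. This shows the simulation style only;
# it is not the SST/macro implementation.
import heapq
import itertools

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._ids = itertools.count()   # tie-breaker for equal timestamps

    def schedule(self, delay, action, *args):
        heapq.heappush(self._queue, (self.now + delay, next(self._ids), action, args))

    def run(self):
        while self._queue:
            self.now, _, action, args = heapq.heappop(self._queue)
            action(*args)

# Usage: model an MPI-like send with an assumed fixed virtual latency/bandwidth.
sim = Simulator()

def send(src, dst, nbytes):
    latency = 1e-6 + nbytes / 10e9            # assumed 1 us + 10 GB/s link
    sim.schedule(latency, recv, src, dst, nbytes)

def recv(src, dst, nbytes):
    print(f"t={sim.now:.2e}s rank {dst} received {nbytes} B from rank {src}")

sim.schedule(0.0, send, 0, 1, 8 * 1024)
sim.run()
```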

  1. Comprehensive Materials and Morphologies Study of Ion Traps (COMMIT) for Scalable Quantum Computation

    DTIC Science & Technology

    2012-04-21

    Trapped-ion systems are extremely promising for large-scale quantum computation, but face a vexing problem with motional quantum ... the photoelectric effect. The typical shortest wavelengths needed for ion traps range from 194 nm for Hg+ to 493 nm for Ba+, corresponding to 6.4-2.5 eV.

  2. (U) Status of Trinity and Crossroads Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Billy Joe; Lujan, James Westley; Hemmert, K. S.

    2017-01-10

    (U) This paper provides a general overview of current and future plans for the Advanced Simulation and Computing (ASC) Advanced Technology (AT) systems fielded by the New Mexico Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratories. Additionally, this paper touches on research of technology beyond traditional CMOS. The status of Trinity, ASC's first AT system, and Crossroads, anticipated to succeed Trinity as the third AT system in 2020, will be presented, along with initial performance studies of the Intel Knights Landing Xeon Phi processors, introduced on Trinity. The challenges and opportunities for our production simulation codes on AT systems will also be discussed. Trinity and Crossroads are a joint procurement by ACES and Lawrence Berkeley National Laboratory as part of the Alliance for application Performance at EXtreme scale (APEX), http://apex.lanl.gov.

  3. Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures

    NASA Astrophysics Data System (ADS)

    Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng

    2002-03-01

    The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into “strong” (exact matrix elements) and “weak” (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. In addition, a “multilevel” algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  5. Computational data sciences for assessment and prediction of climate extremes

    NASA Astrophysics Data System (ADS)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge-gaps in climate science which is relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes, and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  6. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  7. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
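
    The modeling stage can be sketched as a regularized extreme learning machine: a fixed random hidden layer maps hop-count vectors to features, and only the output weights are solved by ridge-regularized least squares. The data, dimensions, and regularization value below are synthetic assumptions, not the paper's setup.

```python
# Hedged sketch of the modeling stage: a regularized extreme learning machine
# maps hop-count vectors to physical distances. Random hidden weights are fixed;
# only the output weights are solved by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_anchors, n_hidden, lam = 200, 8, 64, 1e-2

hops = rng.integers(1, 12, size=(n_pairs, n_anchors)).astype(float)  # hop counts
dist = hops @ rng.uniform(20, 30, n_anchors) / n_anchors             # synthetic distances

W = rng.normal(size=(n_anchors, n_hidden))     # fixed random input weights
b = rng.normal(size=n_hidden)                  # fixed random biases
H = np.tanh(hops @ W + b)                      # hidden-layer activations

# Ridge-regularized output weights: beta = (H^T H + lam*I)^-1 H^T d
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dist)

pred = np.tanh(hops @ W + b) @ beta
print("RMSE:", np.sqrt(np.mean((pred - dist) ** 2)))
```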

  8. Examining Extreme Events Using Dynamically Downscaled 12-km WRF Simulations

    EPA Science Inventory

    Continued improvements in the speed and availability of computational resources have allowed dynamical downscaling of global climate model (GCM) projections to be conducted at increasingly finer grid scales and over extended time periods. The implementation of dynamical downscal...

  9. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that the average error always drops to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
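
    The toy Python script below emulates, in serial, the relaxed-synchronization idea described above: a 1-D heat equation is advanced with a central difference in which neighbor values may be several time steps stale, mimicking delayed halo messages. The random delay model and the use of stale values at every grid point (rather than only at PE boundaries) are simplifying assumptions for illustration, not the proposed asynchrony-tolerant schemes.

      import numpy as np

      rng = np.random.default_rng(1)
      nx, nt, alpha, dx, dt = 64, 200, 1.0, 1.0 / 64, 1e-4
      u = np.sin(2 * np.pi * np.linspace(0, 1, nx, endpoint=False))
      history = [u.copy()]           # past fields, so stale neighbor values can be read
      max_delay = 3                  # maximum staleness, in time steps

      for n in range(nt):
          delay = rng.integers(0, min(max_delay + 1, len(history)))  # random message lag
          stale = history[-1 - delay]
          left  = np.roll(stale, 1)   # neighbor values come from a delayed field
          right = np.roll(stale, -1)
          u = u + alpha * dt / dx**2 * (left - 2 * u + right)
          history.append(u.copy())
          if len(history) > max_delay + 1:
              history.pop(0)

      print("final max |u| =", float(np.abs(u).max()))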

  10. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    PubMed

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
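
    A minimal preconditioned conjugate-gradient sketch in the spirit of the scheme described above is given below (Python/SciPy); a toy 2-D Laplacian system and a simple Jacobi preconditioner stand in for the wavefront-reconstruction operator and the multigrid preconditioner of the paper, so sizes and operators are assumptions.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 64                                        # grid is n x n
      lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))  # 2-D Laplacian
      b = np.ones(A.shape[0])

      d = A.diagonal()                              # Jacobi preconditioner M ~ diag(A)
      M = spla.LinearOperator(A.shape, matvec=lambda x: x / d)

      x, info = spla.cg(A, b, M=M, maxiter=500)     # preconditioned conjugate gradients
      print("cg info:", info, " residual norm:", np.linalg.norm(b - A @ x))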

  11. The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz

    2015-07-01

    This paper is intended to ascertain the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on an hourly basis. This research is motivated by historical records of extreme rainfall in Peninsular Malaysia, where many hydrological disasters in the region have occurred within short time periods. The hourly aggregation periods considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; this study therefore enables a comparison of the estimated performance between daily and hourly rainfall analyses so as to identify the impact of extreme rainfall at a shorter time scale. Return levels based on the time aggregations considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For the return periods considered (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs between 1- and 24-h extreme rainfalls. The outcomes of this study provide additional information regarding patterns of extreme rainfall in Malaysia which may not be detected when considering only a longer time scale such as daily; appropriate measures for shorter time scales of extreme rainfall can thus be planned. The implementation of such measures would help the authorities reduce the impact of disastrous natural events.
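
    The short Python sketch below illustrates the return-level computation referred to above: a GEV distribution is fitted to annual maxima and inverted at the 10-, 50-, and 100-year return periods. SciPy's fit is maximum-likelihood rather than the L-moment estimation used in the paper, and the data are synthetic; both are assumptions made for illustration only.

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(2)
      # Synthetic 40-year series of annual hourly-rainfall maxima (mm).
      annual_max_mm = genextreme.rvs(c=-0.1, loc=60, scale=15, size=40, random_state=rng)

      shape, loc, scale = genextreme.fit(annual_max_mm)   # maximum-likelihood GEV fit
      for T in (10, 50, 100):                             # return periods in years
          level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
          print(f"{T:>3}-year return level: {level:6.1f} mm")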

  12. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometer resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, because micron or sub-micron resolution can only be achieved on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro to nano scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological materials, and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation - methods, computing cost, and accuracy. 3) Solid mechanical computations at the pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.

  13. Field Scale Monitoring and Modeling of Water and Chemical Transfer in the Vadose Zone

    USDA-ARS?s Scientific Manuscript database

    Natural resource systems involve highly complex interactions of soil-plant-atmosphere-management components that are extremely difficult to quantitatively describe. Computer simulations for prediction and management of watersheds, water supply areas, and agricultural fields and farms have become inc...

  14. Computer adaptive test approach to the assessment of children and youth with brachial plexus birth palsy.

    PubMed

    Mulcahey, M J; Merenda, Lisa; Tian, Feng; Kozin, Scott; James, Michelle; Gogola, Gloria; Ni, Pengsheng

    2013-01-01

    This study examined the psychometric properties of item pools relevant to upper-extremity function and activity performance and evaluated simulated 5-, 10-, and 15-item computer adaptive tests (CATs). In a multicenter, cross-sectional study of 200 children and youth with brachial plexus birth palsy (BPBP), parents responded to upper-extremity (n = 52) and activity (n = 34) items using a 5-point response scale. We used confirmatory and exploratory factor analysis, ordinal logistic regression, item maps, and standard errors to evaluate the psychometric properties of the item banks. Validity was evaluated using analysis of variance and Pearson correlation coefficients. Results show that the two item pools have acceptable model fit, scaled well for children and youth with BPBP, and had good validity, content range, and precision. Simulated CATs performed comparably to the full item banks, suggesting that a reduced number of items provide similar information to the entire set of items. Copyright © 2013 by the American Occupational Therapy Association, Inc.

  15. [Computer mediated discussion and attitude polarization].

    PubMed

    Shiraishi, Takashi; Endo, Kimihisa; Yoshida, Fujio

    2002-10-01

    This study examined the hypothesis that computer mediated discussions lead to more extreme decisions than face-to-face (FTF) meetings. Kiesler, Siegel, & McGuire (1984) claimed that computer mediated communication (CMC) tended to be relatively uninhibited, as seen in 'flaming', and that group decisions under CMC using the Choice Dilemma Questionnaire tended to be more extreme and riskier than in FTF meetings. However, for the same reason, CMC discussions on controversial social issues, for which participants initially hold strongly opposing views, might be less likely to reach a consensus, and no polarization should occur. Fifteen 4-member groups discussed a controversial social issue under one of three conditions: FTF, CMC, and partition. After discussion, participants rated their position as a group on a 9-point bipolar scale ranging from strong disagreement to strong agreement. A stronger polarization effect was observed for FTF groups than for those whose members were separated by partitions. However, no extreme shift from their original, individual positions was found for CMC participants. These results were discussed in terms of 'expertise and status equalization' and 'absence of social context cues' under CMC.

  16. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  17. Critical exponents of extremal Kerr perturbations

    NASA Astrophysics Data System (ADS)

    Gralla, Samuel E.; Zimmerman, Peter

    2018-05-01

    We show that scalar, electromagnetic, and gravitational perturbations of extremal Kerr black holes are asymptotically self-similar under the near-horizon, late-time scaling symmetry of the background metric. This accounts for the Aretakis instability (growth of transverse derivatives) as a critical phenomenon associated with the emergent symmetry. We compute the critical exponent of each mode, which is equivalent to its decay rate. It follows from symmetry arguments that, despite the growth of transverse derivatives, all generally covariant scalar quantities decay to zero.

  18. Consolidation of cloud computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  19. Test and Evaluation of Architecture-Aware Compiler Environment

    DTIC Science & Technology

    2011-11-01

    biology, medicine, social sciences, and security applications. Challenges include extremely large graphs (the Facebook friend network has over... five challenge problems empirically, exploring their scaling properties, computation and datatype needs, memory behavior, and temporal behavior

  20. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Demand for faster computers is growing rapidly to keep pace with the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronics. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  1. Science & Technology Review: September 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, Ramona L.; Meissner, Caryn N.; Chinn, Ken B.

    2016-09-30

    This is the September issue of the Lawrence Livermore National Laboratory's Science & Technology Review, which communicates, to a broad audience, the Laboratory’s scientific and technological accomplishments in fulfilling its primary missions. This month, there are features on "Laboratory Investments Drive Computational Advances" and "Laying the Groundwork for Extreme-Scale Computing." Research highlights include "Nuclear Data Moves into the 21st Century", "Peering into the Future of Lick Observatory", and "Facility Drives Hydrogen Vehicle Innovations."

  2. Spatial variability of extreme rainfall at radar subpixel scale

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Marra, Francesco; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2018-01-01

    Extreme rainfall is quantified in engineering practice using Intensity-Duration-Frequency (IDF) curves that are traditionally derived from rain gauges and, more recently, also from remote sensing instruments such as weather radars. These instruments measure rainfall at different spatial scales: a rain gauge samples rainfall at the point scale, while a weather radar averages precipitation over a relatively large area, generally around 1 km2. As such, a radar-derived IDF curve is representative of the mean areal rainfall over a given radar pixel and neglects the within-pixel rainfall variability. In this study, we quantify the subpixel variability of extreme rainfall by using a novel space-time rainfall generator (STREAP model) that downscales in space the rainfall within a given radar pixel. The study was conducted using a unique radar data record (23 years) and a very dense rain-gauge network in the Eastern Mediterranean area (northern Israel). Radar IDF curves, together with an ensemble of point-based IDF curves representing the radar subpixel extreme rainfall variability, were developed by fitting Generalized Extreme Value (GEV) distributions to annual rainfall maxima. It was found that the mean areal extreme rainfall derived from the radar underestimates most of the extreme values computed for point locations within the radar pixel (on average, ∼70%). The subpixel variability of rainfall extremes was found to increase with longer return periods and shorter durations (e.g. from a maximum variability of 10% for a return period of 2 years and a duration of 4 h to 30% for a 50-year return period and 20-min duration). For the longer return periods, a considerable enhancement of extreme rainfall variability was found when stochastic (natural) climate variability was taken into account. Bounding the range of the subpixel extreme rainfall derived from radar IDF curves can be of major importance for applications that require very local estimates of rainfall extremes.

  3. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics, and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails of their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical, and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  4. Multiple multicontrol unitary operations: Implementation and applications

    NASA Astrophysics Data System (ADS)

    Lin, Qing

    2018-04-01

    The efficient implementation of computational tasks is critical to quantum computations. In quantum circuits, multicontrol unitary operations are important components. Here, we present an extremely efficient and direct approach to multiple multicontrol unitary operations without decomposition to CNOT and single-photon gates. With the proposed approach, the necessary two-photon operations could be reduced from O(n^3) with the traditional decomposition approach to O(n), which will greatly relax the requirements and make large-scale quantum computation feasible. Moreover, we propose the potential application to the (n-k)-uniform hypergraph state.

  5. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  6. Extreme Scale Computing Studies

    DTIC Science & Technology

    2010-12-01

    Contributors include William Carlson (Institute for Defense Analyses), William Dally (Stanford University), Monty Denneau (IBM T. J. Watson Research Laboratories), and Paul Franzon (North Carolina State University).

  7. ExScal Backbone Network Architecture

    DTIC Science & Technology

    2005-01-01

    802.11 battery-powered nodes was laid over the sensor network. We adopted the Stargate platform for the backbone tier to serve as the basis for... its head. XSS Hardware and Network: XSS stands for eXtreme Scaling Stargate. A Stargate is a Linux-based single-board computer. It has a 400 MHz

  8. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted using two applications of frame differencing, and a bounding box regarded as the target region is computed. Extremal regions are obtained with MSER. After being fitted to elliptical regions, the regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are obtained by requiring the ratio of the closest to the second-closest descriptor distance to be below a threshold, and outlier points are eliminated using RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale, and illumination.
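
    A compact sketch of this matching pipeline is given below (Python; OpenCV 4.4+ with SIFT assumed): MSER keypoints, SIFT descriptors, a nearest/second-nearest ratio test, and RANSAC outlier rejection via a homography. The synthetic images, thresholds, and homography model are illustrative assumptions rather than the authors' code.

      import cv2
      import numpy as np

      # Synthetic "views": a few gray blobs and a shifted copy stand in for two cameras.
      img1 = np.full((240, 320), 40, np.uint8)
      cv2.rectangle(img1, (50, 50), (130, 110), 200, -1)
      cv2.rectangle(img1, (180, 140), (260, 200), 120, -1)
      cv2.circle(img1, (90, 180), 30, 230, -1)
      cv2.circle(img1, (230, 70), 25, 160, -1)
      img1 = cv2.GaussianBlur(img1, (5, 5), 0)
      img2 = np.roll(img1, (12, 20), axis=(0, 1))

      mser = cv2.MSER_create()                  # extremal-region keypoints
      sift = cv2.SIFT_create()                  # descriptors computed on those keypoints
      kp1, des1 = sift.compute(img1, mser.detect(img1))
      kp2, des2 = sift.compute(img2, mser.detect(img2))

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]            # ratio test on the two nearest matches

      if len(good) >= 4:
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC outlier rejection
          print(len(good), "tentative matches,", int(mask.sum()), "RANSAC inliers")
      else:
          print("too few tentative matches:", len(good))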

  9. Characterization and prediction of extreme events in turbulence

    NASA Astrophysics Data System (ADS)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large events in small scales that are intermittent in character. We examine events in energy dissipation rate and enstrophy which are several tens to hundreds to thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).

  10. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two bioinformatics benchmark applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  11. Crossing disciplines and scales to understand the critical zone

    USGS Publications Warehouse

    Brantley, S.L.; Goldhaber, M.B.; Vala, Ragnarsdottir K.

    2007-01-01

    The Critical Zone (CZ) is the system of coupled chemical, biological, physical, and geological processes operating together to support life at the Earth's surface. While our understanding of this zone has increased over the last hundred years, further advance requires scientists to cross disciplines and scales to integrate understanding of processes in the CZ, ranging in scale from the mineral-water interface to the globe. Despite the extreme heterogeneities manifest in the CZ, patterns are observed at all scales. Explanations require the use of new computational and analytical tools, inventive interdisciplinary approaches, and growing networks of sites and people.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Edmond

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the entries of the desired factorization.
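
    The dense, serial toy below (Python/NumPy) illustrates one plausible reading of this formulation: the entries of L and U are treated as the unknowns of the bilinear equations (LU)_ij = a_ij and updated in Jacobi-like sweeps. In the actual approach the sweeps run asynchronously over a sparse pattern in parallel; this sketch only shows the update rule and is an assumption, not the project's code.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 6
      A = np.eye(n) * n + rng.random((n, n))        # diagonally dominant toy matrix

      L = np.eye(n)                                 # initial guesses (unit lower triangular)
      U = np.triu(A)

      for sweep in range(10):                       # Jacobi-like fixed-point sweeps
          L_new, U_new = L.copy(), U.copy()
          for i in range(n):
              for j in range(n):
                  s = L[i, :min(i, j)] @ U[:min(i, j), j]   # partial sum of l_ik * u_kj
                  if i > j:
                      L_new[i, j] = (A[i, j] - s) / U[j, j]
                  else:
                      U_new[i, j] = A[i, j] - s
          L, U = L_new, U_new

      print("factorization residual:", np.abs(L @ U - A).max())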

  13. ExM:System Support for Extreme-Scale, Many-Task Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem-solving methods and application classes. In this document, we report on the combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at the University of Chicago, over the first 8 months (through April 30, 2011).

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
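
    The toy Python simulation below conveys the gossip idea in the abstract: each alive process pushes its set of suspected-failed ranks to one random peer per cycle until every alive process holds the same set. The push-only protocol, the synchronous cycles, and the parameters are illustrative assumptions, not the algorithms evaluated in the paper.

      import random

      random.seed(4)
      n_procs = 64
      failed = {3, 17, 42}
      alive = [r for r in range(n_procs) if r not in failed]

      # Initially only a few processes have detected the failures.
      view = {r: set() for r in alive}
      for f in failed:
          view[random.choice(alive)].add(f)

      cycles = 0
      while any(view[r] != failed for r in alive):
          cycles += 1
          for r in alive:
              peer = random.choice(alive)
              view[peer] |= view[r]          # push local suspicion list to a random peer
      print(f"global consensus on {sorted(failed)} after {cycles} gossip cycles")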

  15. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  16. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  17. Rapid, high-resolution measurement of leaf area and leaf orientation using terrestrial LiDAR scanning data

    USDA-ARS?s Scientific Manuscript database

    The rapid evolution of high performance computing technology has allowed for the development of extremely detailed models of the urban and natural environment. Although models can now represent sub-meter-scale variability in environmental geometry, model users are often unable to specify the geometr...

  18. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle-mesh Ewald electrostatics algorithm, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct- and reciprocal-space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.

  19. Evaluating the fidelity of CMIP5 models in producing large-scale meteorological patterns over the Northwestern United States

    NASA Astrophysics Data System (ADS)

    Lintner, B. R.; Loikith, P. C.; Pike, M.; Aragon, C.

    2017-12-01

    Climate change information is increasingly required at impact-relevant scales. However, most state-of-the-art climate models are not of sufficiently high spatial resolution to resolve features explicitly at such scales. This challenge is particularly acute in regions of complex topography, such as the Pacific Northwest of the United States. To address this scale mismatch problem, we consider large-scale meteorological patterns (LSMPs), which can be resolved by climate models and associated with the occurrence of local scale climate and climate extremes. In prior work, using self-organizing maps (SOMs), we computed LSMPs over the northwestern United States (NWUS) from daily reanalysis circulation fields and further related these to the occurrence of observed extreme temperatures and precipitation: SOMs were used to group LSMPs into 12 nodes or clusters spanning the continuum of synoptic variability over the regions. Here this observational foundation is utilized as an evaluation target for a suite of global climate models from the Fifth Phase of the Coupled Model Intercomparison Project (CMIP5). Evaluation is performed in two primary ways. First, daily model circulation fields are assigned to one of the 12 reanalysis nodes based on minimization of the mean square error. From this, a bulk model skill score is computed measuring the similarity between the model and reanalysis nodes. Next, SOMs are applied directly to the model output and compared to the nodes obtained from reanalysis. Results reveal that many of the models have LSMPs analogous to the reanalysis, suggesting that the models reasonably capture observed daily synoptic states.
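
    The Python sketch below illustrates the assignment step described above: each simulated daily field is matched to the reanalysis SOM node with the smallest mean-square error, and the resulting node occupancies are compared with a reference frequency. The data shapes, the placeholder reference frequencies, and the simple occupancy-overlap score are assumptions for illustration, not the study's skill metric.

      import numpy as np

      rng = np.random.default_rng(5)
      n_nodes, ny, nx = 12, 20, 30
      nodes = rng.normal(size=(n_nodes, ny * nx))        # reanalysis SOM node patterns
      model_days = rng.normal(size=(365, ny * nx))       # one year of daily model fields

      # Assign each day to the node minimizing the mean-square error.
      mse = ((model_days[:, None, :] - nodes[None, :, :]) ** 2).mean(axis=2)
      assignment = mse.argmin(axis=1)

      model_freq = np.bincount(assignment, minlength=n_nodes) / len(assignment)
      reanalysis_freq = np.full(n_nodes, 1 / n_nodes)    # placeholder reference frequencies
      skill = 1 - 0.5 * np.abs(model_freq - reanalysis_freq).sum()  # 1 = identical occupancy
      print("node occupation skill score:", round(float(skill), 3))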

  20. Lattice Boltzmann for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin

    2017-01-01

    Increase predictive use of high-fidelity Computational Aeroacoustics (CAA) capabilities for NASA's next-generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state problems (RANS), but computational resources are extremely challenged for high-fidelity unsteady problems (e.g. unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Needed: novel techniques for reducing the computational resources consumed by current high-fidelity CAA; routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles; and an order-of-magnitude reduction in wall time to solution.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  2. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. Stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  3. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of ...

  4. Simulation Methods for Optics and Electromagnetics in Complex Geometries and Extreme Nonlinear Regimes with Disparate Scales

    DTIC Science & Technology

    2014-09-30

    software developed with this project support. S1 Cork School 2013: I. UPPEcore Simulator design and usage, Simulation examples II. Nonlinear pulse... pulse propagation 08/28/13 — 08/02/13, University College Cork, Ireland S2 ACMS MURI School 2012: Computational Methods for Nonlinear PDEs describing

  5. The Relationships Between the Trends of Mean and Extreme Precipitation

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Lau, William K.-M.

    2017-01-01

    This study provides a better understanding of the relationships between the trends of mean and extreme precipitation in two observed precipitation data sets: the Climate Prediction Center Unified daily precipitation data set and the Global Precipitation Climatology Program (GPCP) pentad data set. The study employs three definitions of extreme precipitation: (1) percentile, (2) standard deviation, and (3) generalized extreme value (GEV) distribution analysis for extreme events based on local statistics. The relationship between trends in mean and extreme precipitation is identified with a novel metric, the area aggregated matching ratio (AAMR), computed on regional and global scales. Generally, more (fewer) extreme events are likely to occur in regions with a positive (negative) mean trend. The match between the mean and extreme trends deteriorates for increasingly heavy precipitation events. The AAMR is higher in regions with negative mean trends than in regions with positive mean trends, suggesting a higher likelihood of severe dry events compared with heavy rain events in a warming climate. AAMR is found to be higher in the tropics and over oceans than in the extratropics and over land, reflecting a higher degree of randomness and more important dynamical rather than thermodynamical contributions to extreme events in the latter regions.

  6. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the simulation of dendritic growth, computational efficiency and the achievable problem scale have an extremely important influence on the usefulness of three-dimensional phase-field models. Seeking a high-performance calculation method that improves computational efficiency and expands the problem scale is therefore of great significance for research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to carry out quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multi-physical-process coupling. The acceleration effect of different numbers of GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI and GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field model, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing performs better, achieving 1.7 times the performance of the basic multi-GPU model when 21 GPUs are used.
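
    The mpi4py sketch below (assuming an MPI environment is available; the GPU part is omitted) illustrates the overlap-of-communication-and-computation optimization mentioned above: halo cells of a 1-D-decomposed field are exchanged with non-blocking sends and receives while interior cells are updated, and the boundary cells are finished after the wait. It is an illustrative pattern, not the authors' phase-field code.

      # Run with, e.g.: mpiexec -n 4 python overlap_sketch.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      up, down = (rank - 1) % size, (rank + 1) % size   # periodic neighbors

      n = 1024
      u = np.random.random(n + 2)                       # local field with one ghost cell per side
      new = np.empty_like(u)

      # Post the non-blocking halo exchange.
      reqs = [comm.Isend(u[1:2],     dest=up,     tag=0),   # my top cell -> up neighbor
              comm.Isend(u[n:n+1],   dest=down,   tag=1),   # my bottom cell -> down neighbor
              comm.Irecv(u[n+1:n+2], source=down, tag=0),   # down neighbor's top cell -> my ghost
              comm.Irecv(u[0:1],     source=up,   tag=1)]   # up neighbor's bottom cell -> my ghost

      # Overlap: update interior cells that do not need ghost values.
      new[2:n] = 0.5 * (u[1:n-1] + u[3:n+1])

      MPI.Request.Waitall(reqs)                         # ghost cells are now valid
      new[1] = 0.5 * (u[0] + u[2])                      # finish the two boundary cells
      new[n] = 0.5 * (u[n-1] + u[n+1])
      print(f"rank {rank}: interior updated while halos were in flight")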

  7. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends infer that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed-memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme-scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized, stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.

  8. Force Field Accelerated Density Functional Theory Molecular Dynamics for Simulation of Reactive Systems at Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Lindsey, Rebecca; Goldman, Nir; Fried, Laurence

    2017-06-01

    Atomistic modeling of chemistry at extreme conditions remains a challenge, despite continuing advances in computing resources and simulation tools. While first-principles methods provide a powerful predictive tool, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions of atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater-than-2-body interactions and model transferability to different state points, and we discuss approaches to ensure a smooth and reasonable model shape outside of the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
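
    The Python sketch below illustrates the force-matching idea in a heavily simplified, pairwise form: reference pair forces (a synthetic stand-in for short DFT trajectories) are fitted by linear least squares to a Chebyshev series in a transformed pair distance. The 2-body-only form, the distance transform, and the synthetic data are assumptions for illustration, not the published model.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      rng = np.random.default_rng(6)
      r = rng.uniform(0.9, 2.5, size=500)                    # sampled pair distances
      # Synthetic "reference" pair forces standing in for DFT data.
      f_ref = 12 / r**13 - 6 / r**7 + rng.normal(scale=0.01, size=r.size)

      # Map distances to [-1, 1], the natural domain of Chebyshev polynomials.
      s = 2 * (r - r.min()) / (r.max() - r.min()) - 1

      order = 8
      design = C.chebvander(s, order)                        # columns T_0(s) ... T_order(s)
      coef, *_ = np.linalg.lstsq(design, f_ref, rcond=None)  # linear force matching

      f_fit = design @ coef
      print("RMS force error:", np.sqrt(np.mean((f_fit - f_ref) ** 2)))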

  9. An evaluation of the state of time synchronization on leadership class supercomputers

    DOE PAGES

    Jones, Terry; Ostrouchov, George; Koenig, Gregory A.; ...

    2017-10-09

    We present a detailed examination of time agreement characteristics for nodes within extreme-scale parallel computers. Using a software tool we introduce in this paper, we quantify attributes of clock skew among nodes in three representative high-performance computers sited at three national laboratories. Our measurements detail the statistical properties of time agreement among nodes and how time agreement drifts over typical application execution durations. We discuss the implications of our measurements, why the current state of the field is inadequate, and propose strategies to address observed shortcomings.

  10. An evaluation of the state of time synchronization on leadership class supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry; Ostrouchov, George; Koenig, Gregory A.

    We present a detailed examination of time agreement characteristics for nodes within extreme-scale parallel computers. Using a software tool we introduce in this paper, we quantify attributes of clock skew among nodes in three representative high-performance computers sited at three national laboratories. Our measurements detail the statistical properties of time agreement among nodes and how time agreement drifts over typical application execution durations. We discuss the implications of our measurements, why the current state of the field is inadequate, and propose strategies to address observed shortcomings.

  11. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it

    2012-12-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.

  12. A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge

    DTIC Science & Technology

    2016-07-29

    Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and... intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby

  13. PuLP/XtraPuLP : Partitioning Tools for Extreme-Scale Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slota, George M; Rajamanickam, Sivasankaran; Madduri, Kamesh

    2017-09-21

    PuLP/XtraPuLP is software for partitioning graphs from several real-world problems. Graphs arise in many real-world settings, from road networks and social networks to scientific simulations. For efficient parallel processing, these graphs have to be partitioned (split) with respect to metrics such as computation and communication costs. Our software enables such partitioning for massive graphs.

  14. A Unified Data-Driven Approach for Programming In Situ Analysis and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, Alex

    The placement and movement of data is becoming the key limiting factor on both performance and energy efficiency of high performance computations. As systems generate more data, it is becoming increasingly difficult to actually move that data elsewhere for post-processing, as the rate of improvements in supporting I/O infrastructure is not keeping pace. Together, these trends are creating a shift in how we think about exascale computations, from a viewpoint that focuses on FLOPS to one that focuses on data and data-centric operations as fundamental to the reasoning about, and optimization of, scientific workflows on extreme-scale architectures. The overarching goal of our effort was the study of a unified data-driven approach for programming applications and in situ analysis and visualization. Our work was to understand the interplay between data-centric programming model requirements at extreme-scale and the overall impact of those requirements on the design, capabilities, flexibility, and implementation details for both applications and the supporting in situ infrastructure. In this context, we made many improvements to the Legion programming system (one of the leading data-centric models today) and demonstrated in situ analyses on real application codes using these improvements.

  15. A multiply-add engine with monolithically integrated 3D memristor crossbar/CMOS hybrid circuit.

    PubMed

    Chakrabarti, B; Lastras-Montaño, M A; Adam, G; Prezioso, M; Hoskins, B; Payvand, M; Madhavan, A; Ghofrani, A; Theogarajan, L; Cheng, K-T; Strukov, D B

    2017-02-14

    Silicon (Si) based complementary metal-oxide semiconductor (CMOS) technology has been the driving force of the information-technology revolution. However, scaling of CMOS technology as per Moore's law has reached a serious bottleneck. Among the emerging technologies memristive devices can be promising for both memory as well as computing applications. Hybrid CMOS/memristor circuits with CMOL (CMOS + "Molecular") architecture have been proposed to combine the extremely high density of the memristive devices with the robustness of CMOS technology, leading to terabit-scale memory and extremely efficient computing paradigm. In this work, we demonstrate a hybrid 3D CMOL circuit with 2 layers of memristive crossbars monolithically integrated on a pre-fabricated CMOS substrate. The integrated crossbars can be fully operated through the underlying CMOS circuitry. The memristive devices in both layers exhibit analog switching behavior with controlled tunability and stable multi-level operation. We perform dot-product operations with the 2D and 3D memristive crossbars to demonstrate the applicability of such 3D CMOL hybrid circuits as a multiply-add engine. To the best of our knowledge this is the first demonstration of a functional 3D CMOL hybrid circuit.

  16. A multiply-add engine with monolithically integrated 3D memristor crossbar/CMOS hybrid circuit

    PubMed Central

    Chakrabarti, B.; Lastras-Montaño, M. A.; Adam, G.; Prezioso, M.; Hoskins, B.; Cheng, K.-T.; Strukov, D. B.

    2017-01-01

    Silicon (Si) based complementary metal-oxide semiconductor (CMOS) technology has been the driving force of the information-technology revolution. However, scaling of CMOS technology as per Moore’s law has reached a serious bottleneck. Among the emerging technologies, memristive devices are promising for both memory and computing applications. Hybrid CMOS/memristor circuits with the CMOL (CMOS + “Molecular”) architecture have been proposed to combine the extremely high density of memristive devices with the robustness of CMOS technology, leading to terabit-scale memory and an extremely efficient computing paradigm. In this work, we demonstrate a hybrid 3D CMOL circuit with 2 layers of memristive crossbars monolithically integrated on a pre-fabricated CMOS substrate. The integrated crossbars can be fully operated through the underlying CMOS circuitry. The memristive devices in both layers exhibit analog switching behavior with controlled tunability and stable multi-level operation. We perform dot-product operations with the 2D and 3D memristive crossbars to demonstrate the applicability of such 3D CMOL hybrid circuits as a multiply-add engine. To the best of our knowledge, this is the first demonstration of a functional 3D CMOL hybrid circuit. PMID:28195239

  17. Contributions of Dynamic and Thermodynamic Scaling in Subdaily Precipitation Extremes in India

    NASA Astrophysics Data System (ADS)

    Ali, Haider; Mishra, Vimal

    2018-03-01

    Despite the importance of subdaily precipitation extremes for urban areas, the role of dynamic and thermodynamic scaling in changes in precipitation extremes in India remains poorly constrained. Here we estimate contributions from thermodynamic and dynamic scaling to changes in subdaily precipitation extremes for 23 urban locations in India. Subdaily precipitation extremes have become more intense during the last few decades. Moreover, we find a twofold rise in the frequency of subdaily precipitation extremes during 1979-2015, which is faster than the increase in daily precipitation extremes. The contribution of dynamic scaling to this rise in the frequency and intensity of subdaily precipitation extremes is higher than that of thermodynamic scaling. Moreover, half-hourly precipitation extremes show higher contributions from both the thermodynamic (~10%/K) and dynamic (~15%/K) scaling than daily extremes (6%/K and 9%/K, respectively), indicating the role of warming in the rise of subdaily precipitation extremes in India. Our findings have implications for better understanding the dynamic response of precipitation extremes under a warming climate over India.
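
    For orientation, a percentage-per-kelvin scaling rate compounds multiplicatively with warming; the sketch below applies the rates quoted above to an arbitrary baseline intensity (the 2 K warming and 50 mm/h baseline are illustration values only):

        # Scale a baseline extreme intensity by a %/K rate over d_t kelvin of warming.
        def scaled_intensity(p0, rate_per_k, d_t):
            return p0 * (1.0 + rate_per_k) ** d_t

        p0 = 50.0                                   # baseline extreme, mm/h (illustrative)
        for label, rate in [("daily, ~6 %/K", 0.06), ("half-hourly, ~10 %/K", 0.10)]:
            print(label, round(scaled_intensity(p0, rate, d_t=2.0), 1), "mm/h")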

  18. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    PubMed

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms among office computer users in Poland, we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of the upper extremities, neck and back in office workers who use computers on a regular basis, and to find out if the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions covered work history and last-week symptoms of pain in the hand/wrist, elbow, arm, neck, and upper and lower back (occurrence and intensity measured on a visual scale). Altogether 477 men and women returned completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained wrist/hand pain, wrist support explained elbow pain, and chair adjustment explained arm pain. An association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of pain symptoms in the upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  19. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    NASA Astrophysics Data System (ADS)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    Extreme Weather Events (EWEs) cause negative impacts socially, economically, and environmentally. Considering these facts, forecasting EWEs is crucial work. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show large-scale meteorological patterns (LSMPs) that occurred during historical EWEs. Some vital information about the EWEs can be acquired from studying such maps, in addition to providing forecaster guidance. Such maps have robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA EWEs). We study the performance of the composite approach for tropical weather conditions such as those in Indonesia. Initially, the composite maps are developed to identify and forecast extreme weather events in Indramayu district, West Java, the main rice producer in Indonesia, contributing about 60% of the national total rice production. Studying extreme weather events in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE, more than a thousand houses in Indramayu suffered serious flooding, with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the dates involving observations from multiple sites (rain gauges). The approach combines POT (Peaks Over Threshold) with 'declustering' of the data to approximate independence, based on the autocorrelation structure of each rainfall series. The cross-correlation among sites is also considered in developing the event criteria, yielding a rational choice of the extreme dates given the 'spotty' nature of the intense convection. Based on the identified dates, we are developing a supporting tool for forecasting extreme rainfall based on the corresponding large-scale meteorological patterns (LSMPs). The LSMP methodology focuses on the larger-scale patterns that models are better able to forecast, as those larger-scale patterns create the conditions fostering the local EWE. A bootstrap resampling method is applied to highlight the key features that are statistically significant for the extreme events. Grotjahn, R., and G. Faure, 2008: Composite Predictor Maps of Extraordinary Weather Events in the Sacramento California Region. Weather and Forecasting, 23: 313-335.
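
    A minimal sketch of the peaks-over-threshold step with runs declustering (the threshold, separation window, and synthetic rainfall series are assumptions for illustration; the study additionally uses the autocorrelation structure of each series and the cross-correlation among gauges to guide these choices):

        import numpy as np

        def pot_decluster(series, threshold, min_sep):
            """Return indices of declustered peaks over `threshold`.

            Exceedances closer than `min_sep` time steps are treated as one
            cluster and only the cluster maximum is kept (runs declustering).
            """
            exceed = np.where(series > threshold)[0]
            peaks, cluster = [], []
            for idx in exceed:
                if cluster and idx - cluster[-1] > min_sep:
                    peaks.append(max(cluster, key=lambda i: series[i]))
                    cluster = []
                cluster.append(idx)
            if cluster:
                peaks.append(max(cluster, key=lambda i: series[i]))
            return np.array(peaks)

        rng = np.random.default_rng(0)
        rain = rng.gamma(shape=0.4, scale=12.0, size=365)       # synthetic daily rainfall (mm)
        peaks = pot_decluster(rain, threshold=np.quantile(rain, 0.95), min_sep=2)
        print(len(peaks), "declustered extreme days")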

  20. Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coon, Ethan; Porter, Mark L.; Kang, Qinjun

    2012-06-18

    In spatially and temporally localized instances, capturing sub-reservoir-scale information is necessary; capturing it everywhere is neither necessary nor computationally possible. At the pore scale, the lattice Boltzmann method (LBM) provides an extremely scalable and efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows such hybrid approaches to be extended to multi-scale reactive transport.
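
    A toy version of the overlapping Schwarz coupling on a 1D steady diffusion problem (both subdomains use the same finite-difference discretization here, whereas the work above couples a lattice Boltzmann fracture solver to a Darcy continuum solver; grid size, overlap, and boundary values are invented):

        import numpy as np

        def solve_dirichlet(n, left, right):
            """Solve -u'' = 0 on n interior points with Dirichlet boundary values."""
            A = (np.diag(np.full(n, 2.0))
                 - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1))
            b = np.zeros(n)
            b[0], b[-1] = left, right
            return np.linalg.solve(A, b)

        # Global grid of 21 points on [0, 1]; pressure fixed to 1 at x=0 and 0 at x=1.
        N = 21
        u = np.zeros(N)
        u[0] = 1.0

        for _ in range(50):                          # alternating Schwarz sweeps
            u[1:13] = solve_dirichlet(12, left=u[0], right=u[13])   # subdomain 1: points 1..12
            u[8:20] = solve_dirichlet(12, left=u[7], right=u[20])   # subdomain 2: points 8..19

        print(np.round(u, 3))                        # converges to the linear profile 1 - x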

  1. Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds

    NASA Astrophysics Data System (ADS)

    Yoh, Jack J.; McClelland, Matthew A.

    2004-07-01

    We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
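
    A minimal sketch of the slow-heating stage alone: first-order Arrhenius decomposition under an imposed heating ramp, integrated with a simple explicit step (the rate constants, heating rate, and initial temperature are invented illustration values, not the ALE3D model parameters):

        import math

        R = 8.314                      # gas constant, J/(mol K)
        A = 1.0e15                     # pre-exponential factor, 1/s        (illustrative)
        Ea = 1.8e5                     # activation energy, J/mol            (illustrative)
        heat_rate = 5.0 / 3600.0       # imposed slow heating: 5 K/h, in K/s (illustrative)

        T, lam, t, dt = 300.0, 0.0, 0.0, 1.0        # temperature (K), reacted fraction, time (s), step (s)
        while lam < 0.5 and t < 10 * 86400:         # stop at 50% decomposition or 10 days
            k = A * math.exp(-Ea / (R * T))         # Arrhenius rate constant, 1/s
            lam += dt * k * (1.0 - lam)             # first-order decomposition
            T += dt * heat_rate
            t += dt
        print(f"50% decomposed after {t / 3600:.1f} h at T = {T:.0f} K")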

  2. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that involve conventional multi-core CPUs and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the size of the simulation domain to areas spanning continents and the time period to up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536x1536x60 gridpoints. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  3. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Building on scale-invariant image analysis algorithms from computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms; a spatial regression voting method to integrate MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to that of the resampling-based approach while avoiding the LUCC distortion incurred by resampling.

  4. Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.

    2017-04-01

    Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid point extreme precipitation scales more strongly than the percentage increase (˜7% K-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes for zonally and temporally averaged precipitation follow a slight sub-CC scaling, in agreement with results from Climate Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.

  5. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time. PMID:24058330

  6. Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Scott N.

    2016-05-01

    High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop, 10^18 floating point operations per second. On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance, as more and more time is taken away from the application to protect against or recover from failure. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, we examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, we evaluate the potential impact of rollback avoidance on these systems. We then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, we examine the feasibility of using this technique to protect against memory faults in kernel memory.
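
    For context on why checkpoint/restart becomes costly at scale, the classic Young/Daly first-order estimate of the optimal checkpoint interval, sqrt(2*delta*M) for checkpoint cost delta and system mean time between failures M, can be evaluated directly (a standard textbook approximation rather than a result from this thesis; the checkpoint cost and failure rates are illustrative):

        import math

        def young_daly_interval(checkpoint_cost_s, system_mtbf_s):
            """First-order optimal checkpoint interval (Young/Daly approximation)."""
            return math.sqrt(2.0 * checkpoint_cost_s * system_mtbf_s)

        # Illustrative numbers: 5-minute checkpoints; per-node MTBF of 5 years.
        delta = 300.0
        node_mtbf = 5 * 365 * 86400.0
        for nodes in (10_000, 100_000, 1_000_000):
            mtbf = node_mtbf / nodes                      # system MTBF shrinks with scale
            tau = young_daly_interval(delta, mtbf)
            overhead = delta / tau + tau / (2 * mtbf)     # rough fraction of time lost
            print(f"{nodes:>9} nodes: checkpoint every {tau/60:5.1f} min, ~{overhead:.0%} overhead")

    The roughly 200% "overhead" at a million nodes simply signals that the first-order approximation has broken down, which is precisely the regime motivating rollback avoidance.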

  7. High-resolution downscaling for hydrological management

    NASA Astrophysics Data System (ADS)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  8. Nearly extremal apparent horizons in simulations of merging black holes

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Scheel, Mark A.; Owen, Robert; Giesler, Matthew; Katebi, Reza; Szilágyi, Béla; Chu, Tony; Demos, Nicholas; Hemberger, Daniel A.; Kidder, Lawrence E.; Pfeiffer, Harald P.; Afshari, Nousha

    2015-03-01

    The spin angular momentum S of an isolated Kerr black hole is bounded by the surface area A of its apparent horizon: 8πS ≤ A, with equality for extremal black holes. In this paper, we explore the extremality of individual and common apparent horizons for merging, rapidly spinning binary black holes. We consider simulations of merging black holes with equal masses M and initial spin angular momenta aligned with the orbital angular momentum, including new simulations with spin magnitudes up to S/M^2 = 0.994. We measure the area and (using approximate Killing vectors) the spin on the individual and common apparent horizons, finding that the inequality 8πS < A is satisfied in all cases but is very close to equality on the common apparent horizon at the instant it first appears. We also evaluate the Booth-Fairhurst extremality, whose value for a given apparent horizon depends on the scaling of the horizon’s null normal vectors. In particular, we introduce a gauge-invariant lower bound on the extremality by computing the smallest value that Booth and Fairhurst’s extremality parameter can take for any scaling. Using this lower bound, we conclude that the common horizons are at least moderately close to extremal just after they appear. Finally, following Lovelace et al (2008 Phys. Rev. D 78 084017), we construct quasiequilibrium binary-black hole initial data with ‘overspun’ marginally trapped surfaces with 8πS > A. We show that the overspun surfaces are indeed superextremal: our lower bound on their Booth-Fairhurst extremality exceeds unity. However, we confirm that these superextremal surfaces are always surrounded by marginally outer trapped surfaces (i.e., by apparent horizons) with 8πS < A. The extremality lower bound on the enclosing apparent horizon is always less than unity but can exceed the value for an extremal Kerr black hole.
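
    The Kerr bound quoted above can be checked against the standard closed-form horizon area in geometric units (G = c = 1), A = 8*pi*M^2*(1 + sqrt(1 - chi^2)) with S = chi*M^2, so that 8*pi*S <= A reduces to chi <= 1 + sqrt(1 - chi^2). A short numerical check using these textbook Kerr formulas (not the paper's numerical-relativity diagnostics):

        import numpy as np

        def kerr_area(M, chi):
            """Horizon area of a Kerr black hole, geometric units (G = c = 1)."""
            return 8.0 * np.pi * M**2 * (1.0 + np.sqrt(1.0 - chi**2))

        M = 1.0
        for chi in (0.0, 0.5, 0.9, 0.994, 1.0):
            S = chi * M**2                       # spin angular momentum
            A = kerr_area(M, chi)
            # Ratio stays <= 1 and reaches 1 only for the extremal case chi = 1.
            print(f"chi = {chi:5.3f}   8*pi*S / A = {8 * np.pi * S / A:.4f}")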

  9. Programming Probabilistic Structural Analysis for Parallel Processing Computer

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.

    1991-01-01

    The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.

  10. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  11. Joint probabilities of extreme precipitation and wind gusts in Germany

    NASA Astrophysics Data System (ADS)

    von Waldow, H.; Martius, O.

    2012-04-01

    Extreme meteorological events such as storms, heavy rain, floods, droughts and heat waves can have devastating consequences for human health, infrastructure and ecosystems. Concomitantly occurring extreme events might interact synergistically to produce a particularly hazardous impact. The joint occurrence of droughts and heat waves, for example, can have a very different impact on human health and ecosystems both in quantity and quality, than just one of the two extreme events. The co-occurrence of certain types of extreme events is plausible from physical and dynamical considerations, for example heavy precipitation and high wind speeds in the pathway of strong extratropical cyclones. The winter storm Kyrill not only caused wind gust speeds well in excess of 30 m/s across Europe, but also brought 24 h precipitation sums greater than the mean January accumulations in some regions. However, the existence of such compound risks is currently not accounted for by insurance companies, who assume independence of extreme weather events to calculate their premiums. While there are established statistical methods to model the extremes of univariate meteorological variables, the modelling of multidimensional extremes calls for an approach that is tailored to the specific problem at hand. A first step involves defining extreme bivariate wind/precipitation events. Because precipitation and wind gusts caused by the same cyclone or convective cell do not occur at exactly the same location and at the same time, it is necessary to find a sound definition of "extreme compound event" for this case. We present a data driven method to choose appropriate time and space intervals that define "concomitance" for wind and precipitation extremes. Based on station data of wind speed and gridded precipitation data, we arrive at time and space intervals that compare well with the typical time and space scales of extratropical cyclones, i.e. a maximum time lag of 1 day and a maximum distance of about 300 km between associated wind and rain events. After modelling extreme precipitation and wind separately, we explore the practicability of characterising their joint distribution using a bivariate threshold excess model. In particular, we present different dependence measures and report about the computational feasibility and available computer codes.
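
    A minimal sketch of the "concomitance" step, pairing wind and precipitation extremes that fall within a chosen time lag and distance, using the windows the study arrives at (1 day, roughly 300 km); the event lists and coordinates are invented for illustration:

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres."""
            p1, p2 = radians(lat1), radians(lat2)
            dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))

        # Illustrative extreme events: (day index, lat, lon)
        wind_events = [(10, 47.4, 8.5), (45, 46.9, 7.4)]
        rain_events = [(11, 47.0, 8.3), (80, 46.2, 6.1)]

        MAX_LAG_DAYS, MAX_DIST_KM = 1, 300.0
        compound = [(w, r) for w in wind_events for r in rain_events
                    if abs(w[0] - r[0]) <= MAX_LAG_DAYS
                    and haversine_km(w[1], w[2], r[1], r[2]) <= MAX_DIST_KM]
        print(len(compound), "compound wind/precipitation events")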

  12. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs

    PubMed Central

    Jungreuthmayer, Christian; Ruckerbauer, David E.; Gerstl, Matthias P.; Hanscho, Michael; Zanghellini, Jürgen

    2015-01-01

    Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information in the analysis of metabolic networks. Taking gene regulation into account dramatically reduces the solution space and allows the presented algorithm to continually eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs such as runtime, memory usage, and disk space are drastically reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without loss of predictive quality. This makes unbiased, system-wide predictions in large-scale metabolic networks possible without resorting to any optimization principle. PMID:26091045
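
    The core idea of discarding candidate modes whose active reactions violate a transcriptional rule can be sketched in a few lines (the reactions, rules, and modes are invented toy examples; the actual algorithm applies such tests during enumeration rather than as a post-filter, which is where the computational savings come from):

        # Each elementary flux mode is represented by its set of active reactions.
        candidate_modes = [
            {"R_glycolysis", "R_tca"},
            {"R_glycolysis", "R_lactate_export"},
            {"R_pentose_phosphate", "R_tca"},
        ]

        # Transcriptional rules: (if this reaction is active, these must NOT be active).
        regulatory_rules = [
            ("R_pentose_phosphate", {"R_tca"}),          # toy mutual exclusion
            ("R_tca", {"R_lactate_export"}),
        ]

        def feasible(mode):
            return all(not (trigger in mode and mode & forbidden)
                       for trigger, forbidden in regulatory_rules)

        feasible_modes = [m for m in candidate_modes if feasible(m)]
        print(len(feasible_modes), "of", len(candidate_modes), "modes pass the regulatory rules")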

  13. Inland and coastal flooding: developments in prediction and prevention.

    PubMed

    Hunt, J C R

    2005-06-15

    We review the scientific and engineering understanding of various types of inland and coastal flooding by considering the different causes and dynamic processes involved, especially in extreme events. Clear progress has been made in the accuracy of numerical modelling of meteorological causes of floods, hydraulics of flood water movement and coastal wind-wave-surge. Probabilistic estimates from ensemble predictions and the simultaneous use of several models are recent techniques in meteorological prediction that could be considered for hydraulic and oceanographic modelling. The contribution of remotely sensed data from aircraft and satellites is also considered. The need to compare and combine statistical and computational modelling methodologies for long range forecasts and extreme events is emphasized, because this has become possible with the aid of kilometre scale computations and network grid facilities to simulate and analyse time-series and extreme events. It is noted that despite the adverse effects of climatic trends on flooding, appropriate planning of rapidly growing urban areas could mitigate some of the worst effects. However, resources for flood prevention, including research, have to be considered in relation to those for other natural disasters. Policies have to be relevant to the differing geology, meteorology and cultures of the countries affected.

  14. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    NASA Astrophysics Data System (ADS)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low-power consumption features or their ability to carry out robust and efficient computation using massively parallel arrays of limited precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low power, but also variable and unreliable solid-state devices that can potentially extend the offerings of availing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristor and scale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.

  15. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  16. Changes in seasonal streamflow extremes experienced in rivers of Northwestern South America (Colombia)

    NASA Astrophysics Data System (ADS)

    Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.

    2017-04-01

    A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving window, shifted successively by one year, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonally windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using Maximum Likelihood Estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. Such increases did not occur simultaneously throughout the region. Some locations exhibited continuous increases only in the minimum QT.
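
    A minimal sketch of the windowed Gamma fit with bootstrap resampling, using SciPy's maximum-likelihood estimator (the synthetic streamflow series is invented, and the bootstrap count is reduced from the study's 1000 resamples to keep the example quick):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        monthly_flow = rng.gamma(shape=2.0, scale=150.0, size=40 * 12)   # synthetic m^3/s

        def windowed_gamma_params(series, month=0, win_years=20, n_boot=100):
            """Shape/scale MLE per 20-year window (shifted by 1 year), plus a
            bootstrap spread of the shape parameter (the study uses 1000 resamples)."""
            jan = series[month::12]                       # one calendar month per year
            results = []
            for start in range(len(jan) - win_years + 1):
                window = jan[start:start + win_years]
                shape, _, scale = stats.gamma.fit(window, floc=0)   # MLE, location fixed at 0
                boot = [stats.gamma.fit(rng.choice(window, win_years), floc=0)[0]
                        for _ in range(n_boot)]
                results.append((start, shape, scale, float(np.std(boot))))
            return results

        for start, shape, scale, shape_sd in windowed_gamma_params(monthly_flow)[:3]:
            print(f"window starting year {start:2d}: "
                  f"shape={shape:.2f} (+/- {shape_sd:.2f}), scale={scale:.0f}")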

  17. eXascale PRogramming Environment and System Software (XPRESS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Barbara; Gabriel, Edgar

    Exascale systems, with a thousand times the compute capacity of today’s leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  18. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  19. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  20. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  1. Computational approach on PEB process in EUV resist: multi-scale simulation

    NASA Astrophysics Data System (ADS)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As a predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has mainly been studied by experimental groups, but development of its photoresist is at a breaking point because the underlying mechanism of the process remains unresolved. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculation (quantum mechanical simulation) was conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation for constructing a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of both quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.

  2. Quantum information processing with long-wavelength radiation

    NASA Astrophysics Data System (ADS)

    Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried

    To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.

  3. Development and initial psychometric evaluation of an item bank created to measure upper extremity function in persons with stroke.

    PubMed

    Higgins, Johanne; Finch, Lois E; Kopec, Jacek; Mayo, Nancy E

    2010-02-01

    To create and illustrate the development of a method to parsimoniously and hierarchically assess upper extremity function in persons after stroke. Data were analyzed using Rasch analysis. Re-analysis of data from 8 studies involving persons after stroke. Over 4000 patients with stroke who participated in various studies in Montreal and elsewhere in Canada. Data comprised 17 tests or indices of upper extremity function and health-related quality of life, for a total of 99 items related to upper extremity function. Tests and indices included, among others, the Box and Block Test, the Nine-Hole Peg Test and the Stroke Impact Scale. Data were collected at various times post-stroke from 3 days to 1 year. Once the data fit the model, a bank of items measuring upper extremity function with persons and items organized hierarchically by difficulty and ability in log units was produced. This bank forms the basis for eventual computer adaptive testing. The calibration of the items should be tested further psychometrically, as should the interpretation of the metric arising from using the item calibration to measure the upper extremity of individuals.
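
    Under the Rasch model that underlies such an item bank, the probability that a person of ability theta (in logits) succeeds on an item of difficulty b is logistic in theta - b; a short sketch with invented calibration values:

        import math

        def rasch_p(theta, b):
            """Probability of success for ability theta on an item of difficulty b (logits)."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        # Invented item difficulties spanning easy to hard upper-extremity tasks.
        items = {"easy": -1.5, "moderate": 0.0, "hard": 2.0}
        for name, b in items.items():
            print(name, round(rasch_p(theta=0.5, b=b), 2))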

  4. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  5. Accurate and efficient calculation of response times for groundwater flow

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞ . This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L2 / D , where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
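
    A small numerical illustration of the zeroth-moment ("mean action time") response-time estimate for 1D flow in a homogeneous aquifer, compared against the heuristic L^2/D scale (the grid, diffusivity, and boundary values are invented; the method described above sharpens this estimate with higher raw moments and the asymptotics of the cumulative distribution function):

        import numpy as np

        L, D, nx = 100.0, 5.0, 101          # domain length (m), diffusivity (m^2/day), grid points
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / D                # within the explicit-scheme stability limit
        x = np.linspace(0.0, L, nx)

        h = np.zeros(nx)
        h[0] = 1.0                          # step change in head at x = 0; h(L) stays 0
        h_steady = 1.0 - x / L
        i_obs = nx // 2                     # observe the midpoint

        # Mean action time at x_obs: T = integral of (1 - F(t)) dt, with F(t) = h(t)/h_steady.
        T, t = 0.0, 0.0
        while 1.0 - h[i_obs] / h_steady[i_obs] > 1e-6:
            h[1:-1] += D * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
            T += (1.0 - h[i_obs] / h_steady[i_obs]) * dt
            t += dt
        print(f"mean action time at x = L/2: {T:.1f} days;  L^2/D = {L**2 / D:.0f} days")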

  6. ASC ATDM Level 2 Milestone #5325: Asynchronous Many-Task Runtime System Analysis and Assessment for Next Generation Platforms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Gavin Matthew; Bettencourt, Matthew Tyler; Bova, Steven W.

    2015-09-01

    This report provides in-depth information and analysis to help create a technical road map for developing next-generation programming models and runtime systems that support Advanced Simulation and Computing (ASC) workload requirements. The focus herein is on the asynchronous many-task (AMT) model and runtime systems, which are of great interest in the context of exascale computing, as they hold the promise to address key issues associated with future extreme-scale computer architectures. This report includes a thorough qualitative and quantitative examination of three best-of-class AMT runtime systems, Charm++, Legion, and Uintah, all of which are in use as part of the Centers. The studies focus on each of the runtimes' programmability, performance, and mutability. Through the experiments and analysis presented, several overarching Predictive Science Academic Alliance Program II (PSAAP-II) ASC findings emerge. From a performance perspective, AMT runtimes show tremendous potential for addressing extreme-scale challenges. Empirical studies show an AMT runtime can mitigate performance heterogeneity inherent to the machine itself and that Message Passing Interface (MPI) and AMT runtimes perform comparably under balanced conditions. From a programmability and mutability perspective, however, none of the runtimes in this study are currently ready for use in developing production-ready Sandia ASC applications. The report concludes by recommending a co-design path forward, wherein application, programming model, and runtime system developers work together to define requirements and solutions. Such a requirements-driven co-design approach benefits the community as a whole, with widespread community engagement mitigating risk for both application developers and high-performance computing runtime system developers.

  7. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fingersh, Lee J; Loth, Eric; Kaminski, Meghan

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of their full-scale counterparts. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
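
    For intuition, classical Froude (gravity-dominated) similarity already reproduces the headline numbers: at geometric scale factor s, lengths scale by s, masses by s^3, and wind speeds by sqrt(s), so a 25% model needs roughly half the wind speed. A minimal sketch under plain Froude scaling (the reference mass and wind speed are illustrative, and the full gravo-aeroelastic constraints in the report additionally adjust blade mass to match aeroelastic deflections):

        def froude_scaled(length_m, mass_kg, wind_mps, s):
            """Scale blade length, mass, and test wind speed by geometric factor s (Froude similarity)."""
            return length_m * s, mass_kg * s**3, wind_mps * s**0.5

        # SNL100-03-like reference blade (length from the abstract; mass and wind speed illustrative).
        blade_len, blade_mass, rated_wind = 100.0, 5.0e4, 11.3
        print(froude_scaled(blade_len, blade_mass, rated_wind, s=0.25))
        # -> 25 m blade, mass reduced by a factor of 64, wind speed halved (sqrt(0.25) = 0.5)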

  8. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks—packets of particles in our design—to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), that can trace billions of particles in seconds.
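
    A toy sketch of the Monte Carlo core: seed particles, perturb the velocity field in each run, advect with an explicit Euler step over one time interval, and take the ensemble of end positions as an estimate of the stochastic flow map for that interval (the 2D analytic velocity field, noise level, and step sizes are invented; the method above further composes adjacent intervals and parallelizes over particle packets):

        import numpy as np

        rng = np.random.default_rng(42)

        def velocity(p):
            """Illustrative steady 2D rotational velocity field."""
            x, y = p[..., 0], p[..., 1]
            return np.stack([-y, x], axis=-1)

        def trace(seeds, t_end, dt, sigma):
            """Euler advection with additive velocity uncertainty (one Monte Carlo run)."""
            p = seeds.copy()
            for _ in range(int(t_end / dt)):
                noise = rng.normal(0.0, sigma, size=p.shape)
                p += dt * (velocity(p) + noise)
            return p

        seeds = np.array([[1.0, 0.0], [0.0, 1.0]])
        runs = np.stack([trace(seeds, t_end=1.0, dt=0.01, sigma=0.2) for _ in range(100)])
        print("mean end positions:\n", runs.mean(axis=0))
        print("positional std dev:\n", runs.std(axis=0))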

  9. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  10. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities drive economies, such as agriculture in the NHP. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple with identifying which sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability over the NHP, we use the Ocean Land Atmospheric Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography (which reduces error in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet initial conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis, we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the years 2011-2012). We initialized OLAM with CFS data 1-10 days before a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during refined-mesh simulations of at most 15 days, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th degree resolution data set obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving the predictability of extreme hydrometeorological events.

  11. SALGOT - Stroke Arm Longitudinal study at the University of Gothenburg, prospective cohort study protocol

    PubMed Central

    2011-01-01

    Background Recovery patterns of upper extremity motor function have been described in several longitudinal studies, but most of these studies have had selected samples, short follow-up times, or insufficient outcome measures of motor function. The general understanding is that improvements in upper extremity function occur mainly during the first month after the stroke incident and that little, if any, significant recovery can be gained after 3-6 months. The purpose of this study is to describe the recovery of upper extremity function longitudinally in a non-selected sample initially admitted to a stroke unit with first-ever stroke, living in the Gothenburg urban area. Methods/Design A sample of 120 participants with a first-ever stroke and impaired upper extremity function will be consecutively included from an acute stroke unit and followed longitudinally for one year. Assessments are performed on eight occasions: at days 3 and 10, weeks 3, 4 and 6, and months 3, 6 and 12 after onset of stroke. The primary clinical outcome measures are the Action Research Arm Test and the Fugl-Meyer Assessment for Upper Extremity. As additional measures, two new computer-based objective methods with kinematic analysis of arm movements are used. The ABILHAND questionnaire of manual ability, Stroke Impact Scale, grip strength, spasticity, pain, passive range of motion and cognitive function will be assessed as well. At the one-year follow-up, two patient-reported outcomes, Impact on Participation and Autonomy and the EuroQol Quality of Life Scale, will be added to cover the status of participation and aspects of health-related quality of life. Discussion This study comprises a non-selected population with first-ever stroke and impaired arm function. Measurements are performed using both traditional clinical assessments and computer-based measurement systems providing objective kinematic data. The ICF classification of functioning, disability and health is used as a framework for the selection of assessment measures. The study design, with several repeated measurements of motor function, will give us more reliable information about recovery patterns after stroke. This knowledge is essential both for optimizing rehabilitation planning and for providing important information to the patient about the recovery perspectives. Trial registration ClinicalTrials.gov: NCT01115348 PMID:21612620

  12. Evolution of Precipitation Extremes in Three Large Ensembles of Climate Simulations - Impact of Spatial and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.

    2017-12-01

    Recent studies indicate that the frequency and intensity of extreme precipitation will increase in a future climate due to global warming. In this study, we compare annual maxima precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at a 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at a 1° resolution. The third ensemble is at the regional scale over both Eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at a 0.11° resolution, driven at its boundaries by the CanESM2-LE. The CRCM5-LE is a new ensemble issued from the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation over the historical (1980-2010) and future (2070-2100) periods are investigated. This results in 1,500 (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1,200 (30 years x 40 members for CESM1-LE) simulated years over both the historical and future periods. Using these large datasets, the empirical daily (and sub-daily for CRCM5-LE) extreme precipitation quantiles for large return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to the three large ensembles. In the CRCM5-LE, the increase in sub-daily extreme precipitation is even larger than that for daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and public safety.
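    As an illustration of the empirical return-level estimation described above, the following minimal Python sketch pools synthetic annual maxima (standing in for the 1,500 or 1,200 pooled ensemble years) and interpolates quantiles for a few return periods from a plotting-position estimate; the values and sample sizes are illustrative assumptions, not results from the ensembles.

    import numpy as np

    def empirical_return_levels(annual_maxima, return_periods):
        """Estimate return levels from pooled annual maxima using the
        empirical (Weibull) plotting position, without fitting a distribution."""
        x = np.sort(np.asarray(annual_maxima))
        n = x.size
        # Non-exceedance probability assigned to each sorted annual maximum
        p = np.arange(1, n + 1) / (n + 1.0)
        levels = {}
        for T in return_periods:
            target = 1.0 - 1.0 / T          # e.g. T=100 -> p=0.99
            levels[T] = np.interp(target, p, x)
        return levels

    # Illustrative pooled sample: 50 members x 30 years = 1,500 annual maxima (mm/day, synthetic)
    rng = np.random.default_rng(0)
    pooled_maxima = rng.gumbel(loc=40.0, scale=12.0, size=1500)
    print(empirical_return_levels(pooled_maxima, [2, 20, 100]))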

  13. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
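    The abstract above mentions an inexact Newton method with Krylov linear solvers; as a hedged sketch of that general technique (not the simulator's actual black oil residual, decoupling, or multigrid preconditioner), the following Python snippet uses SciPy's newton_krylov, which solves each linearized Newton system only approximately with a Krylov method, on a small stand-in nonlinear system.

    import numpy as np
    from scipy.optimize import newton_krylov

    # Stand-in nonlinear residual (not a black-oil residual): a small
    # discretized diffusion-like system with a weak cubic term on a 1-D grid.
    def residual(u):
        r = np.empty_like(u)
        r[0] = u[0] - 1.0                                   # left boundary condition
        r[-1] = u[-1]                                       # right boundary condition
        r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - 0.1 * u[1:-1] ** 3
        return r

    u0 = np.zeros(50)
    # newton_krylov performs inexact Newton steps, solving each linearized
    # system only approximately with a Krylov method (LGMRES by default).
    u = newton_krylov(residual, u0, f_tol=1e-8)
    print(u[:5])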

  14. A Multi-Scale Approach to Airway Hyperresponsiveness: From Molecule to Organ

    PubMed Central

    Lauzon, Anne-Marie; Bates, Jason H. T.; Donovan, Graham; Tawhai, Merryn; Sneyd, James; Sanderson, Michael J.

    2012-01-01

    Airway hyperresponsiveness (AHR), a characteristic of asthma that involves an excessive reduction in airway caliber, is a complex mechanism reflecting multiple processes that manifest over a large range of length and time scales. At one extreme, molecular interactions determine the force generated by airway smooth muscle (ASM). At the other, the spatially distributed constriction of the branching airways leads to breathing difficulties. Similarly, asthma therapies act at the molecular scale while clinical outcomes are determined by lung function. These extremes are linked by events operating over intermediate scales of length and time. Thus, AHR is an emergent phenomenon that limits our understanding of asthma and confounds the interpretation of studies that address physiological mechanisms over a limited range of scales. A solution is a modular computational model that integrates experimental and mathematical data from multiple scales. This includes, at the molecular scale, kinetics, and force production of actin-myosin contractile proteins during cross-bridge and latch-state cycling; at the cellular scale, Ca2+ signaling mechanisms that regulate ASM force production; at the tissue scale, forces acting between contracting ASM and opposing viscoelastic tissue that determine airway narrowing; at the organ scale, the topographic distribution of ASM contraction dynamics that determine mechanical impedance of the lung. At each scale, models are constructed with iterations between theory and experimentation to identify the parameters that link adjacent scales. This modular model establishes algorithms for modeling over a wide range of scales and provides a framework for the inclusion of other responses such as inflammation or therapeutic regimes. The goal is to develop this lung model so that it can make predictions about bronchoconstriction and identify the pathophysiologic mechanisms having the greatest impact on AHR and its therapy. PMID:22701430

  15. Social percolation models

    NASA Astrophysics Data System (ADS)

    Solomon, Sorin; Weisbuch, Gerard; de Arcangelis, Lucilla; Jan, Naeem; Stauffer, Dietrich

    2000-03-01

    We here relate the occurrence of extreme market shares, close to either 0 or 100%, in the media industry to a percolation phenomenon across the social network of customers. We further discuss the possibility of observing self-organized criticality when customers and cinema producers adjust their preferences and the quality of the produced films according to previous experience. Comprehensive computer simulations on square lattices do indeed exhibit self-organized criticality towards the usual percolation threshold and related scaling behaviour.
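    The basic ingredient of such studies is percolation on a square lattice; the following minimal Python sketch estimates the probability of a spanning cluster near the site-percolation threshold. It is an illustrative toy, not the social-percolation or self-organized-criticality model itself, and the lattice size and number of trials are arbitrary assumptions.

    import numpy as np
    from scipy.ndimage import label

    def spans(lattice):
        """True if a cluster of occupied sites connects the top and bottom rows."""
        labels, _ = label(lattice)                 # 4-connected cluster labeling
        top, bottom = set(labels[0]), set(labels[-1])
        return len((top & bottom) - {0}) > 0

    rng = np.random.default_rng(1)
    L, p = 128, 0.5927                             # p near the square-lattice site threshold
    trials = 100
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"spanning fraction at p={p}: {hits / trials:.2f}")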

  16. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing—a road E. E. Cummings described as `. . . never traveled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten-year vision for computing. As part of this planning, LBNL, together with ORNL and ANL, hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than either the President's request, the Senate, or the House had proposed. As a consequence, many of you had to absorb a no-cost extension for your SciDAC work. I am pleased that the President's FY08 budget restores the funding for SciDAC. Quoting from the Advanced Scientific Computing Research description in the House Energy and Water Development Appropriations Bill for FY08, "Perhaps no other area of research at the Department is so critical to sustaining U.S. leadership in science and technology, revolutionizing the way science is done and improving research productivity." As a society we need to revolutionize our approaches to energy, environmental and global security challenges. As we go forward along the road to the X-scale generation, the use of computation will continue to be a critical tool, along with theory and experiment, in understanding the behavior of the fundamental components of nature as well as for fundamental discovery and exploration of the behavior of complex systems. The foundation to overcome these societal challenges will build from the experiences and knowledge gained as you, members of our SciDAC research teams, work together to attack problems at the tera- and peta-scale. If SciDAC is viewed as an experiment for revolutionizing scientific methodology, then a strategic goal of the ASCR program must be to broaden the intellectual base prepared to address the challenges of the new X-scale generation of computing. We must focus the computational science experience gained over the past five years on the opportunities introduced with extreme scale computing. Our facilities are on a path to provide the resources needed to undertake the first part of our journey.
Using the newly upgraded 119 teraflop Cray XT system at the Leadership Computing Facility, SciDAC research teams have in three days performed a 100-year study of the time evolution of the atmospheric CO2 concentration originating from the land surface. The simulation of the El Nino/Southern Oscillation that was part of this study has been characterized as `the most impressive new result in ten years'. SciDAC teams have also gained new insight into the behavior of superheated ionic gas in the ITER reactor as a result of an AORSA run on 22,500 processors that achieved over 87 trillion calculations per second (87 teraflops), which is 74% of the system's theoretical peak. Tomorrow, Argonne and IBM will announce that the first IBM Blue Gene/P, a 100 teraflop system, will be shipped to the Argonne Leadership Computing Facility later this fiscal year. By the end of FY2007, ASCR high performance and leadership computing resources will include the 114 teraflop IBM Blue Gene/P, a 102 teraflop Cray XT4 at NERSC, and a 119 teraflop Cray XT system at Oak Ridge. Before ringing in the New Year, Oak Ridge will upgrade to 250 teraflops with the replacement of the dual core processors with quad core processors, and Argonne will upgrade to between 250 and 500 teraflops; next year, a petascale Cray Baker system is scheduled for delivery at Oak Ridge. The multidisciplinary teams in our SciDAC Centers for Enabling Technologies and our SciDAC Institutes must continue to work with our Scientific Application teams to overcome the barriers that prevent effective use of these new systems. These challenges include: the need for new algorithms as well as operating system and runtime software and tools which scale to parallel systems composed of hundreds of thousands of processors; program development environments and tools which scale effectively and provide ease of use for developers and scientific end users; and visualization and data management systems that support moving, storing, analyzing, manipulating and visualizing multi-petabytes of scientific data and objects. The SciDAC Centers, located primarily at our DOE national laboratories, will take the lead in ensuring that critical computer science and applied mathematics issues are addressed in a timely and comprehensive fashion, and in addressing issues associated with the research software lifecycle. In contrast, the SciDAC Institutes, which are university-led centers of excellence, will have more flexibility to pursue new research topics through a range of research collaborations. The Institutes will also work to broaden the intellectual and researcher base—conducting short courses and summer schools to take advantage of new high performance computing capabilities. The SciDAC Outreach Center at Lawrence Berkeley National Laboratory complements the outreach efforts of the SciDAC Institutes. The Outreach Center is our clearinghouse for SciDAC activities and resources and will communicate with the high performance computing community in part to understand their needs for workshops, summer schools and institutes. SciDAC is not ASCR's only effort to broaden the computational science community needed to meet the challenges of the new X-scale generation. I hope that you were able to attend the Computational Science Graduate Fellowship poster session last night. ASCR developed the fellowship in 1991 to meet the nation's growing need for scientists and technology professionals with advanced computer skills. CSGF, now jointly funded by ASCR and NNSA, is more than a traditional academic fellowship.
It has provided more than 200 of the best and brightest graduate students with guidance, support and community in preparing them as computational scientists. Today CSGF alumni are bringing their diverse top-level skills and knowledge to research teams at DOE laboratories and at companies such as Procter & Gamble, Lockheed Martin and Intel. At universities they are working to train the next generation of computational scientists. To build on this success, we intend to develop a wholly new Early Career Principal Investigator (ECPI) program. Our objective is to stimulate academic research in scientific areas within ASCR's purview, especially among faculty in the early stages of their academic careers. Last February, we lost Ken Kennedy, one of the leading lights of our community. As we move forward into the extreme computing generation, his vision and insight will be greatly missed. In memory of Ken Kennedy, we shall designate the ECPI grants to beginning faculty in Computer Science as the Ken Kennedy Fellowship. Watch the ASCR website for more information about ECPI and other early career programs in the computational sciences. We look to you, our scientists, researchers, and visionaries, to take X-scale computing and use it to explode scientific discovery in your fields. We at SciDAC will work to ensure that this tool is the sharpest, most precise and most efficient instrument to carve away the unknown and reveal the most exciting secrets and stimulating scientific discoveries of our time. The partnership between research and computing is the marriage that will spur greater discovery, and as Spenser said to Susan in Robert Parker's novel `Sudden Mischief', `We stick together long enough, and we may get as smart as hell'. Michael Strayer

  17. Highlights of X-Stack ExM Deliverable Swift/T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wozniak, Justin M.

    Swift/T is a key success from the ExM: System support for extreme-scale, many-task applications X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
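    Swift/T is its own language and runtime, so the following Python sketch is only a loose analogy of the dataflow idea described above: coarse-grained tasks (stand-ins for wrapped native code fragments) are composed through data dependencies and executed concurrently by a process pool. The function names and workload are hypothetical, not part of Swift/T.

    from concurrent.futures import ProcessPoolExecutor

    # Stand-ins for wrapped native code fragments (leaf tasks).
    def simulate(params):              # in practice this would call into compiled code
        return sum(p * p for p in params)

    def analyze(value):
        return value ** 0.5

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # Data-dependent composition: each analyze() consumes the result of
            # one simulate(); the independent simulate() tasks run concurrently.
            sims = [pool.submit(simulate, [i, i + 1, i + 2]) for i in range(8)]
            results = [pool.submit(analyze, s.result()) for s in sims]
            print([round(r.result(), 3) for r in results])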

  18. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    NASA Astrophysics Data System (ADS)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional- to local-scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local-scales where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local-scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
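    As an illustration of the composite-analysis step described above, the following Python sketch selects days exceeding a high percentile of local precipitation and averages a large-scale field over those days; the arrays are synthetic stand-ins, and the 95th-percentile threshold is an assumption rather than the study's criterion.

    import numpy as np

    rng = np.random.default_rng(2)
    ndays, nlat, nlon = 3650, 40, 60
    precip_local = rng.gamma(shape=2.0, scale=3.0, size=ndays)    # daily precip at one grid cell (synthetic)
    z500 = rng.normal(size=(ndays, nlat, nlon))                   # daily large-scale field anomalies (synthetic)

    # Extreme-day selection: days above the 95th percentile of local precipitation
    threshold = np.percentile(precip_local, 95)
    extreme_days = precip_local >= threshold

    # Composite: mean large-scale pattern on extreme days minus the all-day mean
    composite_anomaly = z500[extreme_days].mean(axis=0) - z500.mean(axis=0)
    print(extreme_days.sum(), composite_anomaly.shape)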

  19. Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.

    2017-12-01

    We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
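    A much-simplified, stationary sketch of the index-gauge idea is shown below in Python: watershed grid cells are averaged to an index series, a GEV is fitted to that series to obtain return levels, and mean disaggregation ratios translate them back to the grid. The study itself uses a Bayesian non-stationary model with climate covariates, which this sketch deliberately omits; all data here are synthetic.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)
    nyears, ncells = 36, 25
    # Synthetic gridded seasonal precipitation maxima (mm), one value per year per cell
    grid_max = rng.gumbel(loc=30.0, scale=8.0, size=(nyears, ncells)) * rng.uniform(0.8, 1.2, size=ncells)

    # "Index gauge": areal mean over the watershed's grid cells, one value per year
    index_gauge = grid_max.mean(axis=1)

    # Stationary GEV fit to the index-gauge series (the study instead uses a
    # Bayesian non-stationary model with teleconnection covariates)
    shape, loc, scale = genextreme.fit(index_gauge)
    return_periods = np.array([2, 10, 50, 100])
    index_levels = genextreme.ppf(1.0 - 1.0 / return_periods, shape, loc=loc, scale=scale)

    # Disaggregation ratios: cell maxima relative to the index gauge, averaged over years
    ratios = (grid_max / index_gauge[:, None]).mean(axis=0)
    grid_levels = np.outer(index_levels, ratios)      # return level per (return period, cell)
    print(dict(zip(return_periods, index_levels.round(1))))
    print(grid_levels.shape)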

  20. Toward a theoretical framework for trustworthy cyber sensing

    NASA Astrophysics Data System (ADS)

    Xu, Shouhuai

    2010-04-01

    Cyberspace is an indispensable part of the economy and society, but has been "polluted" with many compromised computers that can be abused to launch further attacks against others. Since it is likely that there will always be compromised computers, it is important to be aware of the (dynamic) cyber security-related situation, which is, however, challenging because cyberspace is an extremely large-scale complex system. Our project aims to investigate a theoretical framework for trustworthy cyber sensing. With the perspective of treating cyberspace as a large-scale complex system, the core question we aim to address is: What would be a competent theoretical (mathematical and algorithmic) framework for designing, analyzing, deploying, managing, and adapting cyber sensor systems so as to provide trustworthy information or input to the higher layer of cyber situation-awareness management, even in the presence of sophisticated malicious attacks against the cyber sensor systems?

  1. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at the mid-point of the tibialis anterior (TA) muscle among 47 computer users with chronic upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as the conditioning pain stimulus and PPT at TA as the test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either PPTs or pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with chronic, low-intensity upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  2. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality impedes the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled here as the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM for short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM.
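    The core construction can be sketched in a few lines of Python: hidden-node weights are taken from the leading right singular vectors of a random data subset, and output weights are obtained by least squares. This is a hedged, minimal stand-in for illustration, not the authors' FSVD-H-ELM implementation; the subset size, activation function, and node count are assumptions.

    import numpy as np

    def svd_elm_train(X, Y, n_hidden, subset_size=2000, rng=None):
        """ELM whose hidden-node weights are leading right singular vectors of a
        random data subset (SVD hidden nodes); output weights by least squares."""
        rng = rng or np.random.default_rng(0)
        idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        W = Vt[:n_hidden].T                     # (n_features, n_hidden) SVD hidden weights
        H = np.tanh(X @ W)                      # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return W, beta

    def svd_elm_predict(X, W, beta):
        return np.tanh(X @ W) @ beta

    # Toy usage on synthetic data
    rng = np.random.default_rng(4)
    X = rng.normal(size=(5000, 50))
    Y = (X[:, :5].sum(axis=1) > 0).astype(float)[:, None]
    W, beta = svd_elm_train(X, Y, n_hidden=20, rng=rng)
    acc = ((svd_elm_predict(X, W, beta) > 0.5) == (Y > 0.5)).mean()
    print(f"training accuracy: {acc:.2f}")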

  3. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
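    The dominant kernel of such an AO real-time controller is a large matrix-vector multiply (control matrix times slope vector) applied every frame; the following Python sketch times that kernel and reports mean latency and RMS jitter. The matrix dimensions are illustrative assumptions only, and the code is not KNL-specific or representative of the paper's actual pipeline.

    import time
    import numpy as np

    # Illustrative sizes only; an ELT-scale SCAO system is considerably larger.
    n_slopes, n_actuators, n_frames = 9600, 5000, 500
    rng = np.random.default_rng(5)
    control_matrix = rng.standard_normal((n_actuators, n_slopes)).astype(np.float32)
    slopes = rng.standard_normal(n_slopes).astype(np.float32)

    latencies = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        commands = control_matrix @ slopes       # per-frame wavefront reconstruction
        latencies.append(time.perf_counter() - t0)

    lat = np.array(latencies[50:])               # discard warm-up frames
    print(f"mean latency {lat.mean()*1e3:.2f} ms, "
          f"rate {1.0/lat.mean():.0f} Hz, RMS jitter {lat.std()*1e6:.1f} us")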

  4. Optimized Materials From First Principles Simulations: Are We There Yet?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galli, G; Gygi, F

    2005-07-26

    In the past thirty years, the use of scientific computing has become pervasive in all disciplines: collection and interpretation of most experimental data is carried out using computers, and physical models in computable form, with various degrees of complexity and sophistication, are utilized in all fields of science. However, full prediction of physical and chemical phenomena based on the basic laws of Nature, using computer simulations, is a revolution still in the making, and it involves some formidable theoretical and computational challenges. We illustrate the progress and successes obtained in recent years in predicting fundamental properties of materials in condensed phases and at the nanoscale, using ab-initio, quantum simulations. We also discuss open issues related to the validation of the approximate, first principles theories used in large scale simulations, and the resulting complex interplay between computation and experiment. Finally, we describe some applications, with focus on nanostructures and liquids, both at ambient and under extreme conditions.

  5. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
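    For reference, a minimal serial simulated annealing loop on a simple quadratic test problem (similar in spirit to the test problem mentioned above) can be written as follows in Python; the cooling schedule and proposal scale are arbitrary assumptions, and the parallel-neighborhood evaluation that defines SPAN is not attempted here.

    import numpy as np

    def simulated_annealing(cost, x0, t0=10.0, cooling=0.98, steps=5000, rng=None):
        """Serial simulated annealing with a Gaussian neighborhood proposal."""
        rng = rng or np.random.default_rng(0)
        x, best = np.array(x0, float), np.array(x0, float)
        fx, fbest, T = cost(x), cost(best), t0
        for _ in range(steps):
            cand = x + rng.normal(scale=0.5, size=x.size)
            fc = cost(cand)
            # Metropolis acceptance: always take improvements, sometimes accept worse moves
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x.copy(), fx
            T *= cooling
        return best, fbest

    quadratic = lambda x: float(np.sum(x ** 2))   # simple quadratic test problem
    print(simulated_annealing(quadratic, x0=np.full(5, 3.0)))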

  6. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core due to the chosen parameter range.

  7. Non-Gaussian Nature of Fracture and the Survival of Fat-Tail Exponents

    NASA Astrophysics Data System (ADS)

    Tallakstad, Ken Tore; Toussaint, Renaud; Santucci, Stephane; Måløy, Knut Jørgen

    2013-04-01

    We study the fluctuations of the global velocity Vl(t), computed at various length scales l, during the intermittent mode-I propagation of a crack front. The statistics converge to a non-Gaussian distribution, with an asymmetric shape and a fat tail. This breakdown of the central limit theorem (CLT) is due to the diverging variance of the underlying local crack front velocity distribution, displaying a power law tail. Indeed, by the application of a generalized CLT, the full shape of our experimental velocity distribution at large scale is shown to follow the stable Levy distribution, which preserves the power law tail exponent under upscaling. This study aims to demonstrate in general for crackling noise systems how one can infer the complete scale dependence of the activity—and extreme event distributions—by measuring only at a global scale.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher D.; Meredith, Jeremy S.; Blanco, Marc

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when executing both torus and dragonfly network models on up to 4K Blue Gene/Q nodes using 32K MPI ranks. Durango also avoids the overheads and complexities associated with extreme-scale trace files.

  9. Effects of packet retransmission with finite packet lifetime on traffic capacity in scale-free networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Ma, Jian-Feng

    Existing routing strategies such as global dynamic routing [X. Ling, M. B. Hu, R. Jiang and Q. S. Wu, Phys. Rev. E 81, 016113 (2010)] can achieve very high traffic capacity at the cost of extremely long packet traveling delays. In many real complex networks, especially for real-time applications such as instant messaging software, extremely long packet traveling times are unacceptable. In this work, we propose to assign a finite Time-to-Live (TTL) parameter to each packet. To guarantee that every packet arrives at its destination, we assume that a packet is retransmitted by its source once its TTL expires. We employ source routing mechanisms in the traffic model to avoid the routing flaps induced by global dynamic routing. We perform extensive simulations to verify the proposed mechanisms. With a small TTL, the effects of packet retransmission on network traffic capacity are pronounced, and a phase transition from the free-flow state to the congested state occurs. To reduce the frequency with which the routing table is computed, we employ a computing cycle Tc within which the routing table is recomputed once. The simulation results show that the traffic capacity decreases with increasing Tc. Our work provides insight into the effects of packet retransmission with a finite packet lifetime on traffic capacity in scale-free networks.

  10. A Computer-Adaptive Disability Instrument for Lower Extremity Osteoarthritis Research Demonstrated Promising Breadth, Precision and Reliability

    PubMed Central

    Jette, Alan M.; McDonough, Christine M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Latham, Nancy; Hambleton, Ronald K.; Felson, David; Kim, Young-jo; Hunter, David

    2012-01-01

    Objective To develop and evaluate a prototype measure (OA-DISABILITY-CAT) for osteoarthritis research using Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies. Study Design and Setting We constructed an item bank consisting of 33 activities commonly affected by lower extremity (LE) osteoarthritis. A sample of 323 adults with LE osteoarthritis reported their degree of limitation in performing everyday activities and completed the Health Assessment Questionnaire-II (HAQ-II). We used confirmatory factor analyses to assess scale unidimensionality and IRT methods to calibrate the items and examine the fit of the data. Using CAT simulation analyses, we examined the performance of OA-DISABILITY-CATs of different lengths compared to the full item bank and the HAQ-II. Results One distinct disability domain was identified. The 10-item OA-DISABILITY-CAT demonstrated a high degree of accuracy compared with the full item bank (r=0.99). The item bank and the HAQ-II scales covered a similar estimated scoring range. In terms of reliability, 95% of OA-DISABILITY reliability estimates were over 0.83 versus 0.60 for the HAQ-II. Except at the highest scores the 10-item OA-DISABILITY-CAT demonstrated superior precision to the HAQ-II. Conclusion The prototype OA-DISABILITY-CAT demonstrated promising measurement properties compared to the HAQ-II, and is recommended for use in LE osteoarthritis research. PMID:19216052

  11. Computational Cosmology at the Bleeding Edge

    NASA Astrophysics Data System (ADS)

    Habib, Salman

    2013-04-01

    Large-area sky surveys are providing a wealth of cosmological information to address the mysteries of dark energy and dark matter. Observational probes based on tracking the formation of cosmic structure are essential to this effort, and rely crucially on N-body simulations that solve the Vlasov-Poisson equation in an expanding Universe. As statistical errors from survey observations continue to shrink, and cosmological probes increase in number and complexity, simulations are entering a new regime in their use as tools for scientific inference. Changes in supercomputer architectures provide another rationale for developing new parallel simulation and analysis capabilities that can scale to computational concurrency levels measured in the millions to billions. In this talk I will outline the motivations behind the development of the HACC (Hardware/Hybrid Accelerated Cosmology Code) extreme-scale cosmological simulation framework and describe its essential features. By exploiting a novel algorithmic structure that allows flexible tuning across diverse computer architectures, including accelerated and many-core systems, HACC has attained a performance of 14 PFlops on the IBM BG/Q Sequoia system at 69% of peak, using more than 1.5 million cores.

  12. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    NASA Astrophysics Data System (ADS)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of the dynamics of complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. The large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of the underlying mechanisms and is key to the modeling and prediction of such systems. This study introduces and applies information-theoretic diagnostics to the phase and amplitude time series of different wavelet components of observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical climate models, as well as in statistical climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework applied here is directly applicable to inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.

  13. In Situ Methods, Infrastructures, and Applications on High Performance Computing Platforms, a State-of-the-art (STAR) Report

    DOE PAGES

    Bethel, EW; Bauer, A; Abbasi, H; ...

    2016-06-10

    The considerable interest in the high performance computing (HPC) community regarding analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first storing to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose some complex behavior missed in coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.

  14. Generative Representations for Computer-Automated Design Systems

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2004-01-01

    With the increasing computational power of Computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design programs is the representation with which they encode designs. If the representation cannot encode a certain design, then the design program cannot produce it. Similarly, a poor representation makes some types of designs extremely unlikely to be created. Here we define generative representations as those representations which can create and reuse organizational units within a design and argue that reuse is necessary for design systems to scale to more complex and interesting designs. To support our argument we describe GENRE, an evolutionary design program that uses both a generative and a non-generative representation, and compare the results of evolving designs with both types of representations.

  15. Future Extreme Heat Scenarios to Enable the Assessment of Climate Impacts on Public Health over the Coterminous U.S

    NASA Astrophysics Data System (ADS)

    Quattrochi, D. A.; Crosson, W. L.; Al-Hamdan, M. Z.; Estes, M. G., Jr.

    2013-12-01

    In the United States, extreme heat is the most deadly weather-related hazard. In the face of a warming climate and urbanization, which contributes to local-scale urban heat islands, it is very likely that extreme heat events (EHEs) will become more common and more severe in the U.S. This research seeks to provide historical and future measures of climate-driven extreme heat events to enable assessments of the impacts of heat on public health over the coterminous U.S. We use atmospheric temperature and humidity information from meteorological reanalysis and from Global Climate Models (GCMs) to provide data on past and future heat events. The focus of research is on providing assessments of the magnitude, frequency and geographic distribution of extreme heat in the U.S. to facilitate public health studies. In our approach, long-term climate change is captured with GCM outputs, and the temporal and spatial characteristics of short-term extremes are represented by the reanalysis data. Two future time horizons for 2040 and 2090 are compared to the recent past period of 1981-2000. We characterize regional-scale temperature and humidity conditions using GCM outputs for two climate change scenarios (A2 and A1B) defined in the Special Report on Emissions Scenarios (SRES). For each future period, 20 years of multi-model GCM outputs are analyzed to develop a 'heat stress climatology' based on statistics of extreme heat indicators. Differences between the two future and the past period are used to define temperature and humidity changes on a monthly time scale and regional spatial scale. These changes are combined with the historical meteorological data, which is hourly and at a spatial scale (12 km) much finer than that of GCMs, to create future climate realizations. From these realizations, we compute the daily heat stress measures and related spatially-specific climatological fields, such as the mean annual number of days above certain thresholds of maximum and minimum air temperatures, heat indices, and a new heat stress variable developed as part of this research that gives an integrated measure of heat stress (and relief) over the course of a day. Comparisons are made between projected (2040 and 2090) and past (1990) heat stress statistics. Outputs are aggregated to the county level, which is a popular scale of analysis for public health interests. County-level statistics are made available to public health researchers by the Centers for Disease Control and Prevention (CDC) via the Wide-ranging Online Data for Epidemiologic Research (WONDER) system. This addition of heat stress measures to CDC WONDER allows decision and policy makers to assess the impact of alternative approaches to optimize the public health response to EHEs. Through CDC WONDER, users are able to spatially and temporally query public health and heat-related data sets and create county-level maps and statistical charts of such data across the coterminous U.S.
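    One of the county-scale summary statistics described above, the mean annual number of days with maximum temperature above a threshold, can be sketched as follows in Python; the daily series and the uniform +2.5 C change applied here are synthetic illustrations, whereas the study derives monthly, regionally varying changes from the GCM scenarios.

    import numpy as np

    rng = np.random.default_rng(6)
    years, days_per_year = 20, 365
    # Synthetic daily maximum temperature (deg C) for one county over 20 years:
    # an annual cycle plus day-to-day noise (illustrative only).
    t = np.linspace(0.0, 2.0 * np.pi * years, years * days_per_year)
    tmax_hist = 15.0 + 12.0 * np.sin(t) + rng.normal(0.0, 3.0, years * days_per_year)

    def mean_days_above(tmax_daily, threshold_c, days_per_year=365):
        """Mean annual count of days exceeding a maximum-temperature threshold."""
        per_year = (tmax_daily.reshape(-1, days_per_year) > threshold_c).sum(axis=1)
        return per_year.mean()

    # Illustrative uniform +2.5 C change; the study applies monthly,
    # regionally varying deltas derived from GCM scenario differences.
    tmax_future = tmax_hist + 2.5
    for name, series in [("historical", tmax_hist), ("future", tmax_future)]:
        print(name, round(mean_days_above(series, threshold_c=32.0), 1), "days/yr above 32 C")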

  16. Future Extreme Heat Scenarios to Enable the Assessment of Climate Impacts on Public Health over the Coterminous U.S.

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Crosson, William L.; Al-Hamdan, Mohammad Z.; Estes, Maurice G., Jr.

    2013-01-01

    In the United States, extreme heat is the most deadly weather-related hazard. In the face of a warming climate and urbanization, which contributes to local-scale urban heat islands, it is very likely that extreme heat events (EHEs) will become more common and more severe in the U.S. This research seeks to provide historical and future measures of climate-driven extreme heat events to enable assessments of the impacts of heat on public health over the coterminous U.S. We use atmospheric temperature and humidity information from meteorological reanalysis and from Global Climate Models (GCMs) to provide data on past and future heat events. The focus of research is on providing assessments of the magnitude, frequency and geographic distribution of extreme heat in the U.S. to facilitate public health studies. In our approach, long-term climate change is captured with GCM outputs, and the temporal and spatial characteristics of short-term extremes are represented by the reanalysis data. Two future time horizons for 2040 and 2090 are compared to the recent past period of 1981- 2000. We characterize regional-scale temperature and humidity conditions using GCM outputs for two climate change scenarios (A2 and A1B) defined in the Special Report on Emissions Scenarios (SRES). For each future period, 20 years of multi-model GCM outputs are analyzed to develop a 'heat stress climatology' based on statistics of extreme heat indicators. Differences between the two future and the past period are used to define temperature and humidity changes on a monthly time scale and regional spatial scale. These changes are combined with the historical meteorological data, which is hourly and at a spatial scale (12 km), to create future climate realizations. From these realizations, we compute the daily heat stress measures and related spatially-specific climatological fields, such as the mean annual number of days above certain thresholds of maximum and minimum air temperatures, heat indices and a new heat stress variable developed as part of this research that gives an integrated measure of heat stress (and relief) over the course of a day. Comparisons are made between projected (2040 and 2090) and past (1990) heat stress statistics. Outputs are aggregated to the county level, which is a popular scale of analysis for public health interests. County-level statistics are made available to public health researchers by the Centers for Disease Control and Prevention (CDC) via the Wideranging Online Data for Epidemiologic Research (WONDER) system. This addition of heat stress measures to CDC WONDER allows decision and policy makers to assess the impact of alternative approaches to optimize the public health response to EHEs. Through CDC WONDER, users are able to spatially and temporally query public health and heat-related data sets and create county-level maps and statistical charts of such data across the coterminous U.S

  17. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. By and large, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, engaged in software technology R&D, and engaged in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  18. The Influence of Recurrent Modes of Climate Variability on the Occurrence of Monthly Temperature Extremes Over South America

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Detzer, Judah; Mechoso, Carlos R.; Lee, Huikyo; Barkhordarian, Armineh

    2017-10-01

    The associations between extreme temperature months and four prominent modes of recurrent climate variability are examined over South America. Associations are computed as the percent of extreme temperature months concurrent with the upper and lower quartiles of the El Niño-Southern Oscillation (ENSO), the Atlantic Niño, the Pacific Decadal Oscillation (PDO), and the Southern Annular Mode (SAM) index distributions, stratified by season. The relationship is strongest for ENSO, with nearly every extreme temperature month concurrent with the upper or lower quartiles of its distribution in portions of northwestern South America during some seasons. The likelihood of extreme warm temperatures is enhanced over parts of northern South America when the Atlantic Niño index is in the upper quartile, while cold extremes are often associated with the lowest quartile. Concurrent precipitation anomalies may contribute to these relationships. The PDO shows weak associations during December, January, and February, while in June, July, and August its relationship with extreme warm temperatures closely matches that of ENSO. This may be due to the positive relationship between the PDO and ENSO, rather than the PDO acting as an independent physical mechanism. Over Patagonia, the SAM is highly influential during spring and fall, with warm and cold extremes being associated with positive and negative phases of the SAM, respectively. Composites of sea level pressure anomalies for extreme temperature months over Patagonia suggest an important role of local synoptic-scale weather variability in addition to a favorable SAM for the occurrence of these extremes.
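    The association metric described above, the percent of extreme temperature months concurrent with a quartile of a climate-mode index, can be sketched in Python as follows; the monthly series are synthetic and the 90th-percentile definition of an extreme month is an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    nmonths = 480                                   # 40 years of monthly data
    enso_index = rng.normal(size=nmonths)           # synthetic climate-mode index
    temperature = 0.6 * enso_index + rng.normal(scale=0.8, size=nmonths)  # synthetic T anomaly

    # Extreme warm months: top 10% of the temperature distribution
    warm_extreme = temperature >= np.percentile(temperature, 90)

    # Association: fraction of extreme warm months falling in the index's upper quartile
    upper_quartile = enso_index >= np.percentile(enso_index, 75)
    association = 100.0 * (warm_extreme & upper_quartile).sum() / warm_extreme.sum()
    print(f"{association:.0f}% of extreme warm months occur with the index in its upper quartile")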

  19. Extreme weather: Subtropical floods and tropical cyclones

    NASA Astrophysics Data System (ADS)

    Shaevitz, Daniel A.

    Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan flood and by upper-level Potential Vorticity disturbances during the Texas/Oklahoma flood. Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood, and it is found that anthropogenic climate change was responsible for a small amount of the rainfall during the event, but that the intensity of such an event may be greatly increased if it occurs in a future climate. In the second part of this thesis, I examine the ability of high-resolution global atmospheric models to simulate TCs. Specifically, I present an intercomparison of several models' ability to simulate the global characteristics of TCs in the current climate. This is a necessary first step before using these models to project future changes in TCs. Overall, the models were able to reproduce the geographic distribution of TCs reasonably well, with some of the models performing remarkably well. The intensity of TCs varied widely between the models, with some of this difference being due to model resolution.

  20. Exact Extremal Statistics in the Classical 1D Coulomb Gas

    NASA Astrophysics Data System (ADS)

    Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory

    2017-08-01

    We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position x_max of the rightmost charge in the limit of large N. We show that the typical fluctuations of x_max around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of x_max for Dyson's log gas. We also compute the large deviation functions of x_max explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.

  1. Overcoming complexities for consistent, continental-scale flood mapping

    NASA Astrophysics Data System (ADS)

    Smith, Helen; Zaidman, Maxine; Davison, Charlotte

    2013-04-01

    The EU Floods Directive requires all member states to produce flood hazard maps by 2013. Although flood mapping practices are well developed in Europe, there are huge variations in the scale and resolution of the maps between individual countries. Since extreme flood events are rarely confined to a single country, this is problematic, particularly for the re/insurance industry, whose exposures often extend beyond country boundaries. Here, we discuss the challenges of large-scale hydrological and hydraulic modelling, using our experience of developing a 12-country model and set of maps to illustrate how consistent, high-resolution river flood maps across Europe can be produced. The main challenges addressed include: data acquisition; manipulating the vast quantities of high-resolution data; and computational resources. Our starting point was to develop robust flood-frequency models that are suitable for estimating peak flows for a range of design flood return periods. We used the index flood approach, based on a statistical analysis of historic river flow data pooled on the basis of catchment characteristics. Historical flow data were therefore sourced for each country and collated into a large pan-European database. After a lengthy validation, these data were divided into 21 separate analysis zones or regions, grouping smaller river basins according to their physical and climatic characteristics. The very large continental-scale basins were each modelled separately on account of their size (e.g. Danube, Elbe, Drava and Rhine). Our methodology allows the design flood hydrograph to be predicted at any point on the river network for a range of return periods. Using JFlow+, JBA's proprietary 2D hydrodynamic model, the calculated out-of-bank flows for all watercourses with an upstream drainage area exceeding 50 km² were routed across two different Digital Terrain Models in order to map the extent and depth of floodplain inundation. This generated modelling for a total river length of approximately 250,000 km. Such a large-scale, high-resolution modelling exercise is extremely demanding on computational resources and would have been unfeasible without the use of Graphics Processing Units on a network of standard-specification gaming computers. Our GPU grid is the world's largest flood-dedicated computer grid. The European river basins were split into approximately 100 separate hydraulic models and managed individually, although care was taken to ensure flow continuity was maintained between models. The flood hazard maps from the modelling were pieced together using GIS techniques to provide flood depth and extent information across Europe to a consistent scale and standard. After discussing the methodological challenges, we shall present our flood hazard maps and, from extensive validation work, compare these against historical flow records and observed flood extents.
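
    The index flood approach summarised above can be illustrated with a brief sketch (illustrative only; the growth-curve ratios, the synthetic flow series and the use of the median annual flood as the index flood are assumptions, not values from the European model): design peak flows are obtained by scaling a pooled regional growth curve by a site-specific index flood.

    ```python
    import numpy as np

    # Hypothetical pooled growth curve: ratio of the T-year flood to the index flood.
    # In practice this comes from the regional (pooled) statistical analysis.
    GROWTH_CURVE = {10: 1.4, 50: 1.9, 100: 2.2, 500: 2.9}   # placeholder ratios

    def index_flood(annual_max_flows):
        """Index flood taken here as the median of the annual maximum flow series (m3/s)."""
        return float(np.median(annual_max_flows))

    def design_flows(annual_max_flows, growth_curve=GROWTH_CURVE):
        """Estimate peak flows for each return period by scaling the growth curve."""
        qmed = index_flood(annual_max_flows)
        return {T: qmed * ratio for T, ratio in growth_curve.items()}

    # Example with a synthetic 30-year annual maximum series
    rng = np.random.default_rng(0)
    annual_maxima = rng.gumbel(loc=120.0, scale=35.0, size=30)   # synthetic annual maxima
    print(design_flows(annual_maxima))
    ```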

  2. Computer Modeling of the Thermal Conductivity of Cometary Ice

    NASA Technical Reports Server (NTRS)

    Bunch, Theodore E.; Wilson, Michael A.; Pohorille, Andrew

    1998-01-01

    The thermal conductivity was found to be only weakly dependent on the microstructure of the amorphous ice. In general, the amorphous ices were found to have thermal conductivities of the same order of magnitude as liquid water. This is in contradiction to recent experimental estimates of the thermal conductivity of amorphous ice, and it is suggested that the extremely low value obtained experimentally is due to larger-scale defects in the ice, such as cracks, but is not an intrinsic property of the bulk amorphous ice.

  3. Turbulent transport measurements with a laser Doppler velocimeter.

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time-average spectrum contains information only for times shorter than the Lagrangian-integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much-longer-wavelength radiation, or a different mode of signal analysis, e.g., FM detection.

  4. Compilation of 1986 Annual Reports of the Navy ELF (Extremely Low Frequency) Communications System Ecological Monitoring Program. Volume 3. TABS H-J.

    DTIC Science & Technology

    1987-07-01

    yearly trends, and the effect of population size and abiotic factors on growth will be completed when the 1984-1986 scales are completed. Fish condition...settling of suspended particles on substrates in its absence. The pumps were powered by a heavy duty marine battery which had to be exchanged and...computer using procedures available in SPSS (Hull and Nie 1981) and programs available in the BIOM statistical package (Rohlf). Sokal and Rohlf (1981

  5. TECA: Petascale pattern recognition for climate science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat; Byna, Surendra; Vishwanath, Venkatram

    Climate Change is one of the most pressing challenges facing humanity in the 21st century. Climate simulations provide us with a unique opportunity to examine effects of anthropogenic emissions. High-resolution climate simulations produce “Big Data”: contemporary climate archives are ≈ 5 PB in size and we expect future archives to measure on the order of exabytes. In this work, we present the successful application of the TECA (Toolkit for Extreme Climate Analysis) framework for extracting extreme weather patterns such as Tropical Cyclones, Atmospheric Rivers and Extra-Tropical Cyclones from TB-sized simulation datasets. TECA has been run at full scale on Cray XE6 and IBM BG/Q systems, and has reduced the runtime for pattern detection tasks from years to hours. TECA has been utilized to evaluate the performance of various computational models in reproducing the statistics of extreme weather events, and for characterizing the change in frequency of storm systems in the future.

  6. Robotic insects: Manufacturing, actuation, and power considerations

    NASA Astrophysics Data System (ADS)

    Wood, Robert

    2015-12-01

    As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power - whereas such questions have in general been answered for larger aircraft. When developing a robot on the scale of a housefly, all hardware must be developed from scratch as there is nothing "off-the-shelf" which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. With these challenges in mind, this talk will present progress in the essential technologies for insect-like robots with an emphasis on multi-scale manufacturing methods, high power density actuation, and energy-efficient power distribution.

  7. A comparative analysis of support vector machines and extreme learning machines.

    PubMed

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence, the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches have become significant and have attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are examined as the training sample size changes. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale sample problems. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
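
    For readers unfamiliar with the ELM formulation being compared here, the following minimal sketch (not the authors' code) shows why training is cheap: hidden-layer weights and biases are drawn at random and kept fixed, and only the output weights are obtained from a single linear least-squares solve. The network size and the toy regression data are arbitrary choices.

    ```python
    import numpy as np

    def elm_train(X, y, n_hidden=50, rng=None):
        """Train a basic ELM: random hidden layer, least-squares output weights."""
        rng = np.random.default_rng(rng)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (kept fixed)
        b = rng.normal(size=n_hidden)                 # random biases (kept fixed)
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy regression example
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    W, b, beta = elm_train(X, y, n_hidden=40, rng=1)
    print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # training mean squared error
    ```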

  8. Computing the proton aurora at early Mars

    NASA Astrophysics Data System (ADS)

    Lovato, K.; Gronoff, G.; Curry, S.; Simon Wedlund, C.; Moore, W. B.

    2017-12-01

    In the early Solar System (~4 Gyr ago), our Sun was only about 70% as luminous as it is today, but much more active. Indeed, for young stars, solar flares occur more frequently and therefore so do coronal mass ejections and solar energetic particle events. With an increase in solar events, the flux of protons becomes extremely high and affects planetary atmospheres in a more extreme way than today. Proton precipitation on planets has an impact on the energy balance of their upper atmospheres, can affect the photochemistry, and can create auroral emissions. Understanding proton precipitation at early Mars can help in understanding the chemical processes at work as well as atmospheric evolution and escape. We concentrated our effort on protons with energies up to a MeV, since they have the most important influence on the upper atmosphere. Using scaling laws, we estimated the proton flux at early Mars up to a MeV. A kinetic 1D code, validated for present-day Mars, was used to compute the effects of low-energy proton precipitation on early Mars. This model solves the coupled H+/H multi-stream dissipative transport equation as well as the transport of the secondary electrons. For early Mars, this allowed us to compute the magnitude of the proton aurora, as well as the corresponding upward H flux.

  9. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are being simulated for plausible physical reasons, boosting confidence in future projections of temperature extremes. Conversely, where model skill is identified to be lower, caution should be exercised in interpreting future projections.

  10. Why should we care about the top quark Yukawa coupling?

    DOE PAGES

    Shaposhnikov, Mikhail; Bezrukov, Fedor

    2015-04-15

    In the cosmological context, for the Standard Model to be valid up to the scale of inflation, the top quark Yukawa coupling y_t should not exceed the critical value y_t^crit, which coincides with good precision (about 0.2‰) with the requirement of the stability of the electroweak vacuum. So, exact measurements of y_t may give an insight into the possible existence and the energy scale of new physics above 100 GeV, which is extremely sensitive to y_t. In this study, we overview the most recent theoretical computations of y_t^crit and the experimental measurements of y_t. Within the theoretical and experimental uncertainties in y_t, the required scale of new physics varies from 10⁷ GeV to the Planck scale, urging for a precise determination of the top quark Yukawa coupling.

  11. Scaled Jump in Gravity-Reduced Virtual Environments.

    PubMed

    Kim, MyoungGon; Cho, Sunglk; Tran, Tanh Quang; Kim, Seong-Pil; Kwon, Ohung; Han, JungHyun

    2017-04-01

    The reduced gravity experienced on lunar or Martian surfaces can be simulated on Earth using a cable-driven system, where the cable lifts a person to reduce his or her weight. This paper presents a novel cable-driven system designed for this purpose. It is integrated with a head-mounted display and a motion capture system. Focusing on jump motion within the system, this paper proposes to scale the jump and reports the experiments made to quantify the extent to which a jump can be scaled without the discrepancy between physical and virtual jumps being noticed by the user. With the tolerable range of scaling computed from these experiments, an application named retargeted jump is developed, in which a user can jump up onto virtual objects while physically jumping on the flat floor of the real world. The core techniques presented in this paper can be extended to develop extreme-sport simulators such as parasailing and skydiving.

  12. On the nonlinearity of spatial scales in extreme weather attribution statements

    NASA Astrophysics Data System (ADS)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos

    2018-04-01

    In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event—some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
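
    The probability-based attribution statements discussed above can be illustrated with a toy calculation (a generic sketch, not the study's framework or data): the probability of exceeding an event threshold is estimated in a "factual" (all forcings) and a "counterfactual" (natural forcings only) ensemble, and their ratio expresses the change in likelihood attributable to anthropogenic emissions.

    ```python
    import numpy as np

    def exceedance_prob(samples, threshold):
        """Fraction of ensemble values exceeding the event threshold."""
        return float(np.mean(np.asarray(samples) > threshold))

    def risk_ratio(factual, counterfactual, threshold):
        """Change in event probability attributable to the imposed forcing difference."""
        p1 = exceedance_prob(factual, threshold)
        p0 = exceedance_prob(counterfactual, threshold)
        return p1 / p0 if p0 > 0 else np.inf

    # Synthetic example: warm-season temperature anomalies (degC) in two ensembles
    rng = np.random.default_rng(42)
    factual = rng.normal(loc=1.0, scale=1.0, size=5000)         # with anthropogenic forcing
    counterfactual = rng.normal(loc=0.0, scale=1.0, size=5000)  # natural forcings only
    print(risk_ratio(factual, counterfactual, threshold=2.0))
    ```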

  13. On the nonlinearity of spatial scales in extreme weather attribution statements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah

    In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event—some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.

  14. On the nonlinearity of spatial scales in extreme weather attribution statements

    DOE PAGES

    Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; ...

    2017-06-17

    In the context of continuing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event—some of these factors are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This is a procedure simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.

  15. Evaluation of permeability and non-Darcy flow in vuggy macroporous limestone aquifer samples with lattice Boltzmann methods

    USGS Publications Warehouse

    Sukop, Michael C.; Huang, Haibo; Alvarez, Pedro F.; Variano, Evan A.; Cunningham, Kevin J.

    2013-01-01

    Lattice Boltzmann flow simulations provide a physics-based means of estimating intrinsic permeability from pore structure and accounting for inertial flow that leads to departures from Darcy's law. Simulations were used to compute intrinsic permeability where standard measurement methods may fail and to provide better understanding of departures from Darcy's law under field conditions. Simulations also investigated resolution issues. Computed tomography (CT) images were acquired at 0.8 mm interscan spacing for seven samples characterized by centimeter-scale biogenic vuggy macroporosity from the extremely transmissive sole-source carbonate karst Biscayne aquifer in southeastern Florida. Samples were as large as 0.3 m in length; 7–9 cm-scale-length subsamples were used for lattice Boltzmann computations. Macroporosity of the subsamples was as high as 81%. Matrix porosity was ignored in the simulations. Non-Darcy behavior led to a twofold reduction in apparent hydraulic conductivity as an applied hydraulic gradient increased to levels observed at regional scale within the Biscayne aquifer; larger reductions are expected under higher gradients near wells and canals. Thus, inertial flows and departures from Darcy's law may occur under field conditions. Changes in apparent hydraulic conductivity with changes in head gradient computed with the lattice Boltzmann model closely fit the Darcy-Forchheimer equation allowing estimation of the Forchheimer parameter. CT-scan resolution appeared adequate to capture intrinsic permeability; however, departures from Darcy behavior were less detectable as resolution coarsened.
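
    The Darcy-Forchheimer fit mentioned above can be sketched as follows (synthetic numbers, not the study's lattice Boltzmann output): the head gradient is modelled as i = q/K + b*q^2, the coefficients are obtained by linear least squares, and the apparent hydraulic conductivity q/i is seen to fall as the gradient grows.

    ```python
    import numpy as np

    def fit_forchheimer(q, i):
        """Fit the Darcy-Forchheimer relation i = q/K + b*q**2 by linear least squares."""
        A = np.column_stack([q, q ** 2])
        (a, b), *_ = np.linalg.lstsq(A, i, rcond=None)
        return 1.0 / a, b            # Darcy conductivity K and Forchheimer coefficient b

    # Synthetic specific discharge q (m/s) and head gradient i (-); placeholder values
    q = np.linspace(1e-4, 5e-2, 30)
    K_true, b_true = 0.3, 4.0
    i = q / K_true + b_true * q ** 2
    K_fit, b_fit = fit_forchheimer(q, i)
    K_apparent = q / i               # apparent conductivity declines as the gradient grows
    print(K_fit, b_fit, K_apparent[0], K_apparent[-1])
    ```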

  16. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
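
    A peaks-over-threshold fit of the kind used above can be sketched with scipy (a generic illustration, not the paper's Bayesian disaggregation model; the synthetic rainfall series, the 95th-percentile threshold choice and the record length are assumptions): the GPD is fitted to threshold exceedances and a T-year return level is read off from the fitted parameters and the yearly exceedance rate.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    def fit_pot(precip, threshold):
        """Fit a GPD to peaks over threshold; return (shape, scale, exceedances)."""
        exceed = precip[precip > threshold] - threshold
        shape, _, scale = genpareto.fit(exceed, floc=0.0)   # location fixed at 0 for POT
        return shape, scale, exceed

    def return_level(threshold, shape, scale, rate_per_year, T):
        """T-year return level for a POT/GPD model with yearly exceedance rate."""
        m = rate_per_year * T
        if abs(shape) < 1e-6:
            return threshold + scale * np.log(m)
        return threshold + (scale / shape) * (m ** shape - 1.0)

    # Synthetic daily precipitation (mm), 30 years of record
    rng = np.random.default_rng(7)
    precip = rng.gamma(shape=0.4, scale=8.0, size=30 * 365)
    u = np.quantile(precip[precip > 0], 0.95)         # threshold from a non-zero quantile
    xi, sigma, exceed = fit_pot(precip, u)
    lam = len(exceed) / 30.0                          # exceedances per year
    print(return_level(u, xi, sigma, lam, T=100))     # 100-year daily rainfall estimate
    ```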

  17. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation, which address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.

  18. Dealing with Non-stationarity in Intensity-Frequency-Duration Curve

    NASA Astrophysics Data System (ADS)

    Rengaraju, S.; Rajendran, V.; C T, D.

    2017-12-01

    Extremes such as floods and droughts are becoming more frequent and more severe in recent times, a change generally attributed to climate change. One of the main concerns is whether present infrastructures such as dams and storm water drainage networks, which were designed under the so-called 'stationarity' assumption, are capable of withstanding the expected severe extremes. The stationarity assumption holds that the statistics of extremes do not change with time. However, recent studies have shown that climate change has altered climate extremes both temporally and spatially. Traditionally, the observed non-stationarity in extreme precipitation is incorporated in the extreme value distributions in terms of changing parameters. Nevertheless, this raises the question of which parameter needs to change, i.e. location, scale or shape, since one or more of these parameters may vary at a given location. Hence, this study aims to detect the changing parameters to reduce the complexity involved in the development of the non-stationary IDF curve and to provide the uncertainty bound of the estimated return level using the Bayesian Differential Evolutionary Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peaks Over Threshold. Then, the time-varying parameter(s) is (are) detected for the extracted series using Generalized Additive Models for Location Scale and Shape (GAMLSS). Then, the IDF curve is constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) is (are) changing with respect to time; otherwise the IDF curve will follow the stationary assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach and the non-stationarity-based IDF curve is compared with the stationarity-based IDF curve. The results of this study emphasize that the time-varying parameters also change spatially and that the IDF curves should incorporate non-stationarity only if there is a change in the parameters, even though there may be significant change in the extreme rainfall series. Our results underscore the importance of updating infrastructure design strategies for the changing climate by adopting non-stationarity-based IDF curves.

  19. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    NASA Astrophysics Data System (ADS)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  20. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background: Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives: To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods: In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results: Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions: Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198

  1. Identifying Heat Waves in Florida: Considerations of Missing Weather Data.

    PubMed

    Leary, Emily; Young, Linda J; DuClos, Chris; Jordan, Melissa M

    2015-01-01

    Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised.
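
    A threshold-based identification of extreme-heat periods, of the general kind discussed in these two records, can be sketched as follows (illustrative only; the 95th-percentile threshold, the two-day minimum run length and the synthetic data are assumptions, and missing days are simply treated as non-hot here rather than imputed):

    ```python
    import numpy as np

    def heat_wave_days(tmax, percentile=95.0, min_run=2):
        """Flag days in runs of at least min_run consecutive days above a percentile threshold."""
        threshold = np.nanpercentile(tmax, percentile)        # missing values ignored here
        hot = np.nan_to_num(tmax, nan=-np.inf) > threshold    # missing days treated as not hot
        flagged = np.zeros(tmax.shape, dtype=bool)
        run_start = None
        for day, is_hot in enumerate(hot):
            if is_hot and run_start is None:
                run_start = day
            elif not is_hot and run_start is not None:
                if day - run_start >= min_run:
                    flagged[run_start:day] = True
                run_start = None
        if run_start is not None and hot.size - run_start >= min_run:
            flagged[run_start:] = True
        return threshold, flagged

    # Synthetic warm-season daily maximum temperatures (degC) with a few missing values
    rng = np.random.default_rng(3)
    tmax = rng.normal(33.0, 2.5, size=183)
    tmax[rng.choice(tmax.size, 5, replace=False)] = np.nan
    threshold, flags = heat_wave_days(tmax)
    print(threshold, int(flags.sum()))
    ```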

  2. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computationally intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution are needed to resolve small effects. The use of higher-order and full-Stokes models has therefore seen very limited usage in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end, we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
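
    The red-black colouring at the heart of the solver described above can be illustrated on a much simpler problem (a linear 2D Poisson equation in NumPy, not the iSOSIA equations, a FAS multigrid cycle or a GPU kernel): points of one colour depend only on points of the other colour, so each half-sweep can be updated independently and is therefore well suited to massively parallel hardware.

    ```python
    import numpy as np

    def red_black_gauss_seidel(u, f, h, sweeps=2000):
        """Red-Black Gauss-Seidel sweeps for the 2D Poisson problem -lap(u) = f with
        fixed (Dirichlet) boundary values. Each colour's points depend only on the other
        colour, so every half-sweep could be updated fully in parallel."""
        for _ in range(sweeps):
            for colour in (0, 1):                          # 0 = red, 1 = black
                for i in range(1, u.shape[0] - 1):
                    j0 = 1 + (i + 1 + colour) % 2          # first interior column of this colour
                    u[i, j0:-1:2] = 0.25 * (u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                                            + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2]
                                            + h * h * f[i, j0:-1:2])
        return u

    n = 65
    h = 1.0 / (n - 1)
    f = np.ones((n, n))                                    # constant forcing, zero boundaries
    u = red_black_gauss_seidel(np.zeros((n, n)), f, h)
    print(u[n // 2, n // 2])                               # converges toward ~0.0737 at the centre
    ```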

  3. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N³) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
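
    The idea of approximating selected entries of an inverse by inverting local principal submatrices can be sketched as follows (a toy dense-matrix illustration of the general technique, not the paper's algorithm, data layout or communication scheme):

    ```python
    import numpy as np

    def selected_inverse_entries(S, neighborhoods):
        """Approximate selected entries of inv(S) by inverting local principal blocks.

        neighborhoods[i] lists the indices (including i) of the local block used for
        row i of the approximate inverse; couplings outside the block are neglected.
        """
        approx = np.zeros_like(S)
        for i, idx in enumerate(neighborhoods):
            idx = np.asarray(idx)
            block_inv = np.linalg.inv(S[np.ix_(idx, idx)])   # invert only the local block
            local_i = int(np.where(idx == i)[0][0])
            approx[i, idx] = block_inv[local_i, :]           # keep only row i's selected entries
        return approx

    # Banded test matrix mimicking an overlap matrix for well-localized orbitals
    n, half_width = 60, 3
    S = np.eye(n) + sum(0.3 ** k * (np.eye(n, k=k) + np.eye(n, k=-k)) for k in range(1, 4))
    neighborhoods = [range(max(0, i - half_width), min(n, i + half_width + 1)) for i in range(n)]
    approx = selected_inverse_entries(S, neighborhoods)
    exact = np.linalg.inv(S)
    print(np.max(np.abs(approx - exact)[approx != 0]))       # error on the computed entries
    ```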

  4. The family of anisotropically scaled equatorial waves

    NASA Astrophysics Data System (ADS)

    RamíRez GutiéRrez, Enver; da Silva Dias, Pedro Leite; Raupp, Carlos; Bonatti, Jose Paulo

    2011-04-01

    In the present work we introduce the family of anisotropic equatorial waves. This family corresponds to equatorial waves at intermediate states between the shallow water and the long wave approximation model. The new family is obtained by using anisotropic time/space scalings on the linearized, unforced and inviscid shallow water model. It is shown that the anisotropic equatorial waves tend to the solutions of the long wave model in one extreme and to the shallow water model solutions in the other extreme of the parameter dependency. Thus, the problem associated with the completeness of the long wave model solutions can be asymptotically addressed. The anisotropic dispersion relation is computed and, in addition to the typical dependency on the equivalent depth, meridional quantum number and zonal wavenumber, it also depends on the anisotropy between both zonal to meridional space and velocity scales as well as the fast to slow time scales ratio. For magnitudes of the scales compatible with those of the tropical region, both mixed Rossby-gravity and inertio-gravity waves are shifted to a moderately higher frequency and, consequently, not filtered out. This draws attention to the fact that, for completeness of the long wave like solutions, it is necessary to include both the anisotropic mixed Rossby-gravity and inertio-gravity waves. Furthermore, the connection of slow and fast manifolds (distinguishing feature of equatorial dynamics) is preserved, though modified for the equatorial anisotropy parameters used (δ ≤ 1). New possibilities of horizontal and vertical scale nonlinear interactions are allowed. Thus, the anisotropic shallow water model is of fundamental importance for understanding multiscale atmosphere and ocean dynamics in the tropics.

  5. Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.

    PubMed

    Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø

    2003-10-07

    The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases; first, optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second for human red blood cell metabolism under various physiological, non-optimal conditions.
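
    Computing an alpha-spectrum as described reduces to a pair of linear programs per extreme pathway. The sketch below (a generic illustration with a made-up pathway matrix, not the paper's metabolic networks) minimises and maximises each weighting alpha_i subject to P·alpha = v and alpha >= 0:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def alpha_spectrum(P, v):
        """Admissible [min, max] weighting of each extreme pathway for a flux vector v.

        Columns of P are extreme pathways; we require P @ alpha = v with alpha >= 0 and,
        for each pathway i, minimise and maximise alpha_i by linear programming.
        """
        n_paths = P.shape[1]
        bounds = [(0, None)] * n_paths
        spectrum = []
        for i in range(n_paths):
            c = np.zeros(n_paths)
            c[i] = 1.0
            low = linprog(c, A_eq=P, b_eq=v, bounds=bounds)
            high = linprog(-c, A_eq=P, b_eq=v, bounds=bounds)
            spectrum.append((low.x[i], high.x[i]))
        return spectrum

    # Toy network: 3 reactions, 4 extreme pathways (columns), one reconstructible flux vector
    P = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0, 1.0]])
    v = P @ np.array([0.5, 1.0, 0.2, 0.3])
    print(alpha_spectrum(P, v))
    ```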

  6. Project APhiD: A Lorenz-gauged A-Φ decomposition for parallelized computation of ultra-broadband electromagnetic induction in a fully heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Weiss, Chester J.

    2013-08-01

    An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and, by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and the length-scale range 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials, whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
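
    A matrix-free Krylov solve of the kind described, in which the operator is applied "on the fly" rather than stored, can be sketched with scipy's LinearOperator and QMR routine (the 1D Laplacian below is a stand-in for the finite-volume system, not the APhiD discretization, and no preconditioner is used):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, qmr

    n = 200
    h = 1.0 / (n + 1)

    def apply_laplacian(u):
        """Matrix-vector product for a 1D Laplacian, computed on the fly (no stored matrix)."""
        out = 2.0 * u
        out[:-1] -= u[1:]
        out[1:] -= u[:-1]
        return out / (h * h)

    # The operator is symmetric, so the transpose product QMR needs is the same routine.
    A = LinearOperator((n, n), matvec=apply_laplacian, rmatvec=apply_laplacian)

    b = np.ones(n)            # right-hand side standing in for a source term
    x, info = qmr(A, b)       # info == 0 signals convergence to the default tolerance
    print(info, np.linalg.norm(apply_laplacian(x) - b) / np.linalg.norm(b))
    ```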

  7. Comparison of the hedonic general Labeled Magnitude Scale with the hedonic 9-point scale.

    PubMed

    Kalva, Jaclyn J; Sims, Charles A; Puentes, Lorenzo A; Snyder, Derek J; Bartoshuk, Linda M

    2014-02-01

    The hedonic 9-point scale was designed to compare palatability among different food items; however, it has also been used occasionally to compare individuals and groups. Such comparisons can be invalid because scale labels (for example, "like extremely") can denote systematically different hedonic intensities across some groups. Addressing this problem, the hedonic general Labeled Magnitude Scale (gLMS) frames affective experience in terms of the strongest imaginable liking/disliking of any kind, which can yield valid group comparisons of food palatability provided extreme hedonic experiences are unrelated to food. For each scale, 200 panelists rated affect for remembered food products (including favorite and least favorite foods) and sampled foods; they also sampled taste stimuli (quinine, sucrose, NaCl, citric acid) and rated their intensity. Finally, subjects identified experiences representing the endpoints of the hedonic gLMS. Both scales were similar in their ability to detect within-subject hedonic differences across a range of food experiences, but group comparisons favored the hedonic gLMS. With the 9-point scale, extreme labels were strongly associated with extremes in food affect. In contrast, gLMS data showed that scale extremes referenced nonfood experiences. Perceived taste intensity significantly influenced differences in food liking/disliking (for example, those experiencing the most intense tastes, called supertasters, showed more extreme liking and disliking for their favorite and least favorite foods). Scales like the hedonic gLMS are suitable for across-group comparisons of food palatability. © 2014 Institute of Food Technologists®

  8. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.

  9. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert; Ang, James; Bergman, Keren

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  10. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
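
    Memory content similarity of the kind examined in this paper can be probed with a very simple block-hashing sketch (a generic illustration, not the authors' runtime or measurement methodology): fixed-size blocks of a snapshot are hashed and the fraction of blocks whose content duplicates another block is reported.

    ```python
    import hashlib
    import os
    from collections import Counter

    def duplicate_block_fraction(snapshot: bytes, block_size: int = 4096):
        """Fraction of fixed-size blocks whose content is identical to another block."""
        digests = [
            hashlib.sha256(snapshot[offset:offset + block_size]).digest()
            for offset in range(0, len(snapshot), block_size)
        ]
        counts = Counter(digests)
        duplicated = sum(count for count in counts.values() if count > 1)
        return duplicated / max(len(digests), 1)

    # Synthetic "snapshot": many zero-filled pages plus unique random pages
    snapshot = bytes(4096) * 600 + os.urandom(4096 * 400)
    print(f"duplicate-block fraction: {duplicate_block_fraction(snapshot):.2f}")   # ~0.60
    ```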

  11. Nearly extremal apparent horizons in simulations of merging black holes

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Scheel, Mark; Owen, Robert; Giesler, Matthew; Katebi, Reza; Szilagyi, Bela; Chu, Tony; Demos, Nicholas; Hemberger, Daniel; Kidder, Lawrence; Pfeiffer, Harald; Afshari, Nousha; SXS Collaboration

    2015-04-01

    The spin S of a Kerr black hole is bounded by the surface area A of its apparent horizon: 8πS ≤ A. We present recent results (arXiv:1411.7297) for the extremality of apparent horizons for merging, rapidly rotating black holes with equal masses and equal spins aligned with the orbital angular momentum. Measuring the area and (using approximate Killing vectors) the spin on the individual and common apparent horizons, we find that the inequality 8πS < A is satisfied but is very close to equality on the common apparent horizon at the instant it first appears--even for initial spins as large as S/M² = 0.994. We compute the smallest value e_0 that Booth and Fairhurst's extremality parameter can take for any scaling of the horizon's null normal vectors, concluding that the common horizons are at least moderately close to extremal just after they appear. We construct binary-black-hole initial data with marginally trapped surfaces with 8πS > A and e_0 > 1, but these surfaces are always surrounded by apparent horizons with 8πS < A and e_0 < 1.

  12. Content Range and Precision of a Computer Adaptive Test of Upper Extremity Function for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.

    2011-01-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…

  13. Data Provenance Hybridization Supporting Extreme-Scale Scientific Workflow Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsethagen, Todd O.; Stephan, Eric G.; Raju, Bibi

    As high performance computing (HPC) infrastructures continue to grow in capability and complexity, so do the applications that they serve. HPC and distributed-area computing (DAC) (e.g. grid and cloud) users are looking increasingly toward workflow solutions to orchestrate their complex application coupling and pre- and post-processing needs. To gain insight and a more quantitative understanding of a workflow's performance, our method includes not only the capture of traditional provenance information, but also the capture and integration of system environment metrics, helping to give context and explanation for a workflow's execution. In this paper, we describe IPPD's provenance management solution (ProvEn) and its hybrid data store combining both of these data provenance perspectives.

  14. Linking Excessive Heat with Daily Heat-Related Mortality over the Coterminous United States

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Crosson, William L.; Al-Hamdan, Mohammad Z.; Estes, Maurice G., Jr.

    2014-01-01

    In the United States, extreme heat is the most deadly weather-related hazard. In the face of a warming climate and urbanization, which contributes to local-scale urban heat islands, it is very likely that extreme heat events (EHEs) will become more common and more severe in the U.S. This research seeks to provide historical and future measures of climate-driven extreme heat events to enable assessments of the impacts of heat on public health over the coterminous U.S. We use atmospheric temperature and humidity information from meteorological reanalysis and from Global Climate Models (GCMs) to provide data on past and future heat events. The focus of research is on providing assessments of the magnitude, frequency and geographic distribution of extreme heat in the U.S. to facilitate public health studies. In our approach, long-term climate change is captured with GCM outputs, and the temporal and spatial characteristics of short-term extremes are represented by the reanalysis data. Two future time horizons for 2040 and 2090 are compared to the recent past period of 1981-2000. We characterize regional-scale temperature and humidity conditions using GCM outputs for two climate change scenarios (A2 and A1B) defined in the Special Report on Emissions Scenarios (SRES). For each future period, 20 years of multi-model GCM outputs are analyzed to develop a 'heat stress climatology' based on statistics of extreme heat indicators. Differences between the two future periods and the past period are used to define temperature and humidity changes on a monthly time scale and regional spatial scale. These changes are combined with the historical meteorological data, which is hourly and at a spatial scale (12 km) much finer than that of GCMs, to create future climate realizations. From these realizations, we compute the daily heat stress measures and related spatially-specific climatological fields, such as the mean annual number of days above certain thresholds of maximum and minimum air temperatures, heat indices, and a new heat stress variable developed as part of this research that gives an integrated measure of heat stress (and relief) over the course of a day. Comparisons are made between projected (2040 and 2090) and past (1990) heat stress statistics. Outputs are aggregated to the county level, which is a popular scale of analysis for public health interests. County-level statistics are made available to public health researchers by the Centers for Disease Control and Prevention (CDC) via the Wide-ranging Online Data for Epidemiologic Research (WONDER) system. This addition of heat stress measures to CDC WONDER allows decision and policy makers to assess the impact of alternative approaches to optimize the public health response to EHEs. Through CDC WONDER, users are able to spatially and temporally query public health and heat-related data sets and create county-level maps and statistical charts of such data across the coterminous U.S.
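
    The construction described above, in which monthly GCM-derived changes are applied to finer-resolution historical meteorology before daily threshold statistics are recomputed, can be sketched as follows (the synthetic temperatures, the +2.5 degC July delta and the 35 degC threshold are illustrative assumptions, not values from the study):

    ```python
    import numpy as np

    def apply_monthly_delta(hourly_temp_c, month_of_hour, monthly_delta_c):
        """Shift hourly historical temperatures by the GCM-derived change for their month."""
        return hourly_temp_c + np.array([monthly_delta_c[m] for m in month_of_hour])

    def days_above(hourly_temp_c, threshold_c, hours_per_day=24):
        """Count days whose maximum temperature exceeds the threshold."""
        daily_max = hourly_temp_c.reshape(-1, hours_per_day).max(axis=1)
        return int(np.sum(daily_max > threshold_c))

    # Synthetic July: 31 days of hourly temperatures with a diurnal cycle (degC)
    rng = np.random.default_rng(11)
    hours = np.arange(31 * 24)
    baseline = 27 + 6 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 1.5, hours.size)
    month_of_hour = np.full(hours.size, 7)           # every hour belongs to July here
    delta = {7: 2.5}                                 # assumed future July warming (degC)

    future = apply_monthly_delta(baseline, month_of_hour, delta)
    print(days_above(baseline, 35.0), days_above(future, 35.0))
    ```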

  15. An Expanded Multi-scale Monte Carlo Simulation Method for Personalized Radiobiological Effect Estimation in Radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Feng, Yuanming; Wang, Wei; Yang, Chengwen; Wang, Ping

    2017-03-01

    A novel and versatile “bottom-up” approach is developed to estimate the radiobiological effect of clinical radiotherapy. The model consists of multi-scale Monte Carlo simulations from the organ to the cell level. At the cellular level, accumulated damages are computed using a spectrum-based accumulation algorithm and a predefined cellular damage database. The damage repair mechanism is modeled by an expanded reaction-rate two-lesion kinetic model, which was calibrated by replicating a radiobiological experiment. Multi-scale modeling is then performed for a lung cancer patient under conventional fractionated irradiation. The cell killing effects of two representative voxels (the isocenter and a peripheral voxel of the tumor) are computed and compared. At the microscopic level, the nucleus dose and damage yields vary among the nuclei within each voxel. A slightly larger percentage of cDSB yield is observed for the peripheral voxel (55.0%) compared to the isocenter one (52.5%). For the isocenter voxel, the survival fraction increases monotonically as the oxygen level is reduced. Under an extreme anoxic condition (0.001%), the survival fraction is calculated to be 80% and the hypoxia reduction factor reaches a maximum value of 2.24. In conclusion, by incorporating biology-related variations, the proposed multi-scale approach is more versatile than existing approaches for evaluating personalized radiobiological effects in radiotherapy.
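
    The expanded two-lesion kinetic model itself is not reproduced here; as a much simpler, generic illustration of how oxygen status enters a computed survival fraction, the sketch below uses the standard linear-quadratic model with an assumed oxygen enhancement ratio (OER). All parameter values are illustrative and are not taken from the study.

    ```python
    import numpy as np

    def survival_fraction(dose_gy, alpha=0.15, beta=0.05, oer=1.0):
        """Standard linear-quadratic survival. Hypoxic cells are modeled as needing
        OER-fold more dose for the same effect, so the effective dose is dose / OER."""
        d_eff = np.asarray(dose_gy) / oer
        return np.exp(-alpha * d_eff - beta * d_eff**2)

    dose = 2.0                                      # one conventional fraction (Gy)
    sf_oxic = survival_fraction(dose, oer=1.0)      # well-oxygenated cells
    sf_hypoxic = survival_fraction(dose, oer=2.5)   # hypoxic cells, assumed OER
    print(sf_oxic, sf_hypoxic, sf_hypoxic / sf_oxic)
    ```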

  16. Development of item bank to measure deliberate self-harm behaviours: facilitating tailored scales and computer adaptive testing for specific research and clinical purposes.

    PubMed

    Latimer, Shane; Meade, Tanya; Tennant, Alan

    2014-07-30

    The purpose of this study was to investigate the application of item banking to questionnaire items intended to measure Deliberate Self-Harm (DSH) behaviours. The Rasch measurement model was used to evaluate behavioural items extracted from seven published DSH scales administered to 568 Australians aged 18-30 years (62% university students, 21% mental health patients, and 17% community members). Ninety-four items were calibrated in the item bank (including 12 items with differential item functioning for gender and age). Tailored scale construction was demonstrated by extracting scales covering different combinations of DSH methods but with the same raw score for each person location on the latent DSH construct. A simulated computer adaptive test (starting with common self-harm methods to minimise presentation of extreme behaviours) demonstrated that 11 items (on average) were needed to achieve a standard error of measurement of 0.387 (corresponding to a Cronbach's alpha of 0.85). This study lays the groundwork for advancing DSH measurement to an item bank approach with the flexibility to measure a specific definitional orientation (e.g., non-suicidal self-injury) or a broad continuum of self-harmful acts, as appropriate to a particular research or clinical purpose. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
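
    The reported pairing of a standard error of measurement of 0.387 with a Cronbach's alpha of 0.85 is consistent with the usual relation reliability = 1 - SEM^2/SD^2 under an assumed person standard deviation of one logit; the short check below is illustrative only.

    ```python
    # Reliability implied by a standard error of measurement (SEM), assuming the
    # person measures have unit standard deviation on the latent (logit) scale.
    sem = 0.387
    person_sd = 1.0                              # assumption; not stated in the abstract
    reliability = 1.0 - (sem / person_sd) ** 2
    print(round(reliability, 2))                 # -> 0.85
    ```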

  17. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    Most of today's visualization libraries and applications are based on what is known as the visualization pipeline. In the visualization pipeline model, algorithms are encapsulated as “filtering” components with inputs and outputs. These components can be combined by connecting the outputs of one filter to the inputs of another filter. The visualization pipeline model is popular because it provides a convenient abstraction that allows users to combine algorithms in powerful ways. Unfortunately, the visualization pipeline cannot run effectively on exascale computers. Experts agree that exascale machines will comprise processors that contain many cores. Furthermore, physical limitations will prevent data movement in and out of the chip (that is, between main memory and the processing cores) from keeping pace with improvements in overall compute performance. To use these processors to their fullest capability, it is essential to carefully consider memory access. This is where the visualization pipeline fails. Each filtering component in the visualization library is expected to take a data set in its entirety, perform some computation across all of the elements, and output the complete results. The process of iterating over all elements must be repeated in each filter, which is one of the worst possible ways to traverse memory when trying to maximize the number of executions per memory access. This project investigates a new type of visualization framework that exhibits the pervasive parallelism necessary to run on exascale machines. Our framework achieves this by defining algorithms in terms of functors, which are localized, stateless operations. Functors can be composited in much the same way as filters in the visualization pipeline, but their design allows them to run concurrently on massive numbers of lightweight threads. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstrably well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.
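
    A minimal sketch of the contrast described above: a filter-style pipeline traverses the data once per stage, whereas composed stateless functors can be fused into a single traversal. The Python below is purely illustrative and is not the project's framework, which targets many-core hardware.

    ```python
    import numpy as np

    data = np.random.rand(100_000)

    # Filter-style pipeline: each stage re-reads and re-writes the whole array.
    def filter_scale(a):  return a * 2.0
    def filter_shift(a):  return a + 1.0
    def filter_clamp(a):  return np.minimum(a, 2.5)

    result_pipeline = filter_clamp(filter_shift(filter_scale(data)))

    # Functor-style: stateless per-element operations are composed first, then the
    # composed operation is applied once per element in a single pass over memory.
    scale = lambda x: x * 2.0
    shift = lambda x: x + 1.0
    clamp = lambda x: min(x, 2.5)

    def compose(*fs):
        def fused(x):
            for f in fs:
                x = f(x)
            return x
        return fused

    fused_op = compose(scale, shift, clamp)
    result_fused = np.fromiter((fused_op(x) for x in data), dtype=float, count=len(data))

    assert np.allclose(result_pipeline, result_fused)
    ```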

  18. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. However, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.
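
    The abstract does not name the subgrid-scale models used; as a generic illustration of the idea (supplying explicit dissipation at the grid scale rather than relying on numerical diffusion), here is a minimal Smagorinsky-type eddy-viscosity sketch for a 2D velocity field, with illustrative constants and grid.

    ```python
    import numpy as np

    def smagorinsky_viscosity(u, v, dx, cs=0.17):
        """Classic Smagorinsky eddy viscosity nu_t = (Cs*dx)^2 * |S| for a 2D field."""
        dudx = np.gradient(u, dx, axis=1)
        dudy = np.gradient(u, dx, axis=0)
        dvdx = np.gradient(v, dx, axis=1)
        dvdy = np.gradient(v, dx, axis=0)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S|
        return (cs * dx) ** 2 * strain_mag

    # Illustrative random velocity field on a 128x128 grid with spacing dx = 1e-3 m
    u = np.random.randn(128, 128)
    v = np.random.randn(128, 128)
    nu_t = smagorinsky_viscosity(u, v, dx=1e-3)
    print(nu_t.mean())
    ```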

  19. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalability of these algorithms is presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
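
    A minimal sketch of one building block mentioned above, a sparse iterative solver with a simple preconditioner, applied to a deterministic 1D diffusion operator with a spatially varying coefficient. The intrusive polynomial chaos and domain decomposition layers are not reproduced; the Jacobi preconditioner and SciPy solver are illustrative choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    # 1D diffusion -d/dx( kappa(x) du/dx ) = f on (0,1), u(0)=u(1)=0,
    # discretized with finite differences on n interior points.
    n = 2000
    h = 1.0 / (n + 1)
    x_faces = np.linspace(0.0, 1.0, n + 1)                  # cell faces
    kappa = 1.0 + 0.5 * np.sin(2 * np.pi * x_faces)         # varying diffusion coefficient

    lower = -kappa[1:-1] / h**2                             # sub/super-diagonal entries
    diag = (kappa[:-1] + kappa[1:]) / h**2                  # main diagonal entries
    A = sp.diags([lower, diag, lower], offsets=[-1, 0, 1], format="csr")

    b = np.ones(n)                                          # unit source term

    # Simple Jacobi (diagonal) preconditioner.
    M = LinearOperator((n, n), matvec=lambda r: r / diag)

    u, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned {info}", u.max())
    ```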

  20. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalability of these algorithms is presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  1. Towards Highly Scalable Ab Initio Molecular Dynamics (AIMD) Simulations on the Intel Knights Landing Manycore Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.

    2017-07-03

    The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute intensive application such as AIMD is a good candidate to leverage this processing power. In this paper, we focus on adding thread level parallelism to the plane wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results for the complete AIMD simulation on a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
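
    A back-of-the-envelope illustration of why tall-skinny matrix products invite a Roofline analysis: their arithmetic intensity (flops per byte moved) grows with the small dimension. The matrix shapes and counts below are generic estimates, not figures from the paper.

    ```python
    def tall_skinny_gemm_intensity(n_pw, n_orb, bytes_per_word=8):
        """Arithmetic intensity (flops/byte) of C = A^T B with A, B of shape (n_pw, n_orb).

        flops   ~ 2 * n_pw * n_orb^2
        traffic ~ reading A and B once plus writing C (ignoring cache reuse).
        """
        flops = 2.0 * n_pw * n_orb ** 2
        bytes_moved = bytes_per_word * (2 * n_pw * n_orb + n_orb ** 2)
        return flops / bytes_moved

    # Many plane-wave grid points, comparatively few orbitals (illustrative sizes):
    for n_orb in (64, 256, 1024):
        print(n_orb, round(tall_skinny_gemm_intensity(n_pw=1_000_000, n_orb=n_orb), 1))
    ```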

  2. Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2015-07-01

    Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm using fewer than 1 000 000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.

  3. It's the Heat AND the Humidity -- Assessment of Extreme Heat Scenarios to Enable the Assessment of Climate Impacts on Public Health

    NASA Technical Reports Server (NTRS)

    Crosson, William L; Al-Hamdan, Mohammad Z.; Economou, Sigrid, A.; Estes, Maurice G.; Estes, Sue M.; Puckett, Mark; Quattrochi, Dale A

    2013-01-01

    In the United States, extreme heat is the most deadly weather-related hazard. In the face of a warming climate and urbanization, which contributes to local-scale urban heat islands, it is very likely that extreme heat events (EHEs) will become more common and more severe in the U.S. In a NASA-funded project supporting the National Climate Assessment, we are providing historical and future measures of extreme heat to enable assessments of the impacts of heat on public health over the coterminous U.S. We use atmospheric temperature and humidity information from meteorological reanalysis and from Global Climate Models (GCMs) to provide data on past and future heat events. The project's emphasis is on providing assessments of the magnitude, frequency and geographic distribution of extreme heat in the U.S. to facilitate public health studies. In our approach, long-term climate change is captured with GCM output, and the temporal and spatial characteristics of short-term extremes are represented by the reanalysis data. Two future time horizons, 2040 and 2090, are the focus of future assessments; these are compared to the recent past period of 1981-2000. We are characterizing regional-scale temperature and humidity conditions using GCM output for two climate change scenarios (A2 and A1B) defined in the Special Report on Emissions Scenarios (SRES). For each future period, 20 years of multi-model GCM output have been analyzed to develop a heat stress climatology based on statistics of extreme heat indicators. Differences between the two future and past periods have been used to define temperature and humidity changes on a monthly time scale and regional spatial scale. These changes, combined with hourly historical meteorological data at a spatial scale (12 km) much finer than that of GCMs, enable us to create future climate realizations, from which we compute the daily heat stress measures and related spatially-specific climatological fields. These include the mean annual number of days above certain thresholds of maximum and minimum air temperatures, heat indices and a new heat stress variable that gives an integrated measure of heat stress (and relief) over the course of a day. Comparisons are made between projected (2040 and 2090) and past (1990) heat stress statistics. All output is being provided at the 12 km spatial scale and will also be aggregated to the county level, which is a popular scale of analysis for public health researchers. County-level statistics will be made available by our collaborators at the Centers for Disease Control and Prevention (CDC) via the Wide-ranging Online Data for Epidemiologic Research (WONDER) system. CDC WONDER makes the information resources of the CDC available to public health professionals and the general public. This addition of heat stress measures to CDC WONDER will allow decision and policy makers to assess the impact of alternative approaches to optimize the public health response to EHEs. It will also allow public health researchers and policy makers to better include such heat stress measures in the context of national health data available in the CDC WONDER system. The users will be able to spatially and temporally query public health and heat-related data sets and create county-level maps and statistical charts of such data across the coterminous U.S.

  4. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.

  5. The Generation of a Stochastic Flood Event Catalogue for Continental USA

    NASA Astrophysics Data System (ADS)

    Quinn, N.; Wing, O.; Smith, A.; Sampson, C. C.; Neal, J. C.; Bates, P. D.

    2017-12-01

    Recent advances in the acquisition of spatiotemporal environmental data and improvements in computational capabilities have enabled the generation of large-scale, even global, flood hazard layers which serve as a critical decision-making tool for a range of end users. However, these datasets are designed to indicate only the probability and depth of inundation at a given location and are unable to describe the likelihood of concurrent flooding across multiple sites. Recent research has highlighted that although the estimation of large, widespread flood events is of great value to the flood mitigation and insurance industries, to date it has been difficult to deal with this spatial dependence structure in flood risk over relatively large scales. Many existing approaches have been restricted to empirical estimates of risk based on historic events, limiting their capability of assessing risk over the full range of plausible scenarios. Therefore, this research utilises a recently developed model-based approach to describe the multisite joint distribution of extreme river flows across continental USA river gauges. Given an extreme event at a site, the model characterises the likelihood that neighbouring sites are also impacted. This information is used to simulate an ensemble of plausible synthetic extreme event footprints, from which flood depths are extracted from an existing global flood hazard catalogue. Expected economic losses are then estimated by overlaying flood depths with national datasets defining asset locations, characteristics and depth damage functions. The ability of this approach to quantify probabilistic economic risk and rare threshold-exceeding events is expected to be of value to those interested in the flood mitigation and insurance sectors. This work describes the methodological steps taken to create the flood loss catalogue over a national scale; highlights the uncertainty in the expected annual economic vulnerability within the USA from extreme river flows; and presents future developments to the modelling approach.
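
    The paper's model equations are not given in the abstract; as a generic illustration of simulating spatially dependent extremes at several gauges, the sketch below uses a Gaussian copula with a fixed inter-site correlation and GEV margins. All parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_sites, n_events = 5, 10_000
    rho = 0.6                                            # assumed inter-site dependence
    cov = np.full((n_sites, n_sites), rho) + (1 - rho) * np.eye(n_sites)

    # Gaussian copula: correlated normals -> uniforms -> GEV-distributed peak flows.
    z = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_events)
    u = stats.norm.cdf(z)
    flows = stats.genextreme.ppf(u, c=-0.1, loc=100.0, scale=30.0)   # synthetic peak flows

    # Given an extreme (99th-percentile) event at site 0, how often does at least
    # one neighbouring site exceed its own 99th-percentile level in the same event?
    q99 = stats.genextreme.ppf(0.99, c=-0.1, loc=100.0, scale=30.0)
    hit0 = flows[:, 0] > q99
    co_occurrence = (flows[hit0, 1:] > q99).any(axis=1).mean()
    print(co_occurrence)
    ```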

  6. Effects of Computer-Aided Interlimb Force Coupling Training on Paretic Hand and Arm Motor Control following Chronic Stroke: A Randomized Controlled Trial

    PubMed Central

    Lin, Chueh-Ho; Chou, Li-Wei; Luo, Hong-Ji; Tsai, Po-Yi; Lieu, Fu-Kong; Chiang, Shang-Lin; Sung, Wen-Hsu

    2015-01-01

    Objective We investigated the effects of interlimb force coupling training on paretic upper extremity outcomes in patients with chronic stroke and analyzed the relationship between motor recovery of the paretic hand and arm and functional performance of the paretic upper limb. Design A randomized controlled trial with outcome assessment at baseline and after 4 weeks of intervention. Setting Taipei Veterans General Hospital, National Yang-Ming University. Participants Thirty-three subjects with chronic stroke were recruited and randomly assigned to a training group (n = 16) or a control group (n = 17). Interventions The computer-aided interlimb force coupling training task with visual feedback included different grip force generation methods with both hands. Main Outcome Measures The Barthel Index (BI), the upper extremity motor control Fugl-Meyer Assessment (FMA-UE), the Motor Assessment Score (MAS), and the Wolf Motor Function Test (WMFT). All assessments were executed by a blinded evaluator, and data management and statistical analysis were also conducted by a blinded researcher. Results The training group demonstrated greater improvement on the FMA-UE (p<.001), WMFT (p<.001), MAS (p = .004) and BI (p = .037) than the control group after 4 weeks of intervention. In addition, a moderate correlation was found between the improvement in scores for the hand items of the FMA and other portions of the FMA-UE (r = .528, p = .018) or the MAS (r = .596, p = .015) in the training group. Conclusion Computer-aided interlimb force coupling training improves the motor recovery of a paretic hand, and facilitates motor control and enhances functional performance in the paretic upper extremity of people with chronic stroke. Trial Registration ClinicalTrials.gov NCT02247674. PMID:26193492

  7. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed, and detailed documentation of the data card format is presented.

  8. Pynamic: the Python Dynamic Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, G L; Ahn, D H; de Supinksi, B R

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  9. Large-Scale Meteorological Patterns Associated with Extreme Precipitation in the US Northeast

    NASA Astrophysics Data System (ADS)

    Agel, L. A.; Barlow, M. A.

    2016-12-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. Tropopause height provides a compact representation of large-scale circulation patterns, as it is linked to mid-level circulation, low-level thermal contrasts and low-level diabatic heating. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into a larger context. Six tropopause patterns are identified on extreme days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong upward motion during, and moisture transport preceding, extreme precipitation events.
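
    A minimal sketch of the clustering step described above: k-means applied to daily large-scale fields flattened into vectors (random data stands in for tropopause height); the number of clusters and the preprocessing are illustrative rather than the study's exact configuration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Placeholder for daily tropopause-height fields: (n_days, n_lat * n_lon)
    n_days, n_lat, n_lon = 3000, 40, 60
    fields = np.random.randn(n_days, n_lat * n_lon)       # replace with reanalysis data

    X = StandardScaler().fit_transform(fields)            # standardize each grid point
    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

    labels = km.labels_                                    # circulation pattern per day
    centroids = km.cluster_centers_.reshape(6, n_lat, n_lon)   # composite patterns (standardized units)
    print(np.bincount(labels))                             # days assigned to each pattern
    ```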

  10. Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc

    2015-10-01

    Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor sufficiently developed for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of these models for inserting new programming model elements into an existing code base.

  11. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  12. The Microphysical Structure of Extreme Precipitation as Inferred from Ground-Based Raindrop Spectra.

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, Remko; Smith, James A.; Steiner, Matthias

    2003-05-01

    The controls on the variability of raindrop size distributions in extreme rainfall and the associated radar reflectivity-rain rate relationships are studied using a scaling-law formalism for the description of raindrop size distributions and their properties. This scaling-law formalism enables a separation of the effects of changes in the scale of the raindrop size distribution from those in its shape. Parameters controlling the scale and shape of the scaled raindrop size distribution may be related to the microphysical processes generating extreme rainfall. A global scaling analysis of raindrop size distributions corresponding to rain rates exceeding 100 mm h⁻¹, collected during the 1950s with the Illinois State Water Survey raindrop camera in Miami, Florida, reveals that extreme rain rates tend to be associated with conditions in which the variability of the raindrop size distribution is strongly number controlled (i.e., characteristic drop sizes are roughly constant). This means that changes in properties of raindrop size distributions in extreme rainfall are largely produced by varying raindrop concentrations. As a result, rainfall integral variables (such as radar reflectivity and rain rate) are roughly proportional to each other, which is consistent with the concept of the so-called equilibrium raindrop size distribution and has profound implications for radar measurement of extreme rainfall. A time series analysis for two contrasting extreme rainfall events supports the hypothesis that the variability of raindrop size distributions for extreme rain rates is strongly number controlled. However, this analysis also reveals that the actual shapes of the (measured and scaled) spectra may differ significantly from storm to storm. This implies that the exponents of power-law radar reflectivity-rain rate relationships may be similar, and close to unity, for different extreme rainfall events, but their prefactors may differ substantially. Consequently, there is no unique radar reflectivity-rain rate relationship for extreme rain rates, but the variability is essentially reduced to one free parameter (i.e., the prefactor). It is suggested that this free parameter may be estimated on the basis of differential reflectivity measurements in extreme rainfall.
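
    A brief numerical illustration of the number-controlled argument: if the drop size distribution keeps a fixed shape and characteristic drop size while only the concentration varies, both Z and R scale linearly with concentration, so Z is proportional to R with a storm-dependent prefactor. Parameter values below are illustrative.

    ```python
    import numpy as np

    # Fixed-shape exponential DSD: N(D) = N0 * exp(-Lambda * D), with Lambda held
    # constant ("number-controlled" variability: only the concentration N0 changes).
    lam = 2.0                                     # 1/mm, fixed characteristic drop size
    D = np.linspace(0.01, 8.0, 4000)              # drop diameter (mm)
    dD = D[1] - D[0]
    shape = np.exp(-lam * D)

    def z_and_r(n0):
        n_d = n0 * shape
        Z = np.sum(n_d * D**6) * dD               # reflectivity factor ~ 6th DSD moment
        R = np.sum(n_d * D**3.67) * dD            # rain rate ~ 3.67th moment (approximate)
        return Z, R

    for n0 in (1e3, 2e3, 4e3):                    # doubling the drop concentration...
        Z, R = z_and_r(n0)
        print(f"N0={n0:.0f}  Z/R={Z/R:.3f}")       # ...leaves Z/R fixed, i.e. Z ∝ R^1
    ```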

  13. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near-ocean regions. The circulation model has O(10⁷) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.
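
    A toy version of the surrogate idea, not the authors' architecture: fit a small neural network to input-output pairs sampled from an expensive model, then use it for fast emulation. Everything below, including the stand-in "expensive model", is illustrative.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def expensive_model(forcings):
        """Stand-in for the circulation model: maps boundary forcings to an output."""
        river, tide, wind = forcings.T
        return np.tanh(river) + 0.5 * np.sin(tide) - 0.2 * wind**2

    # Training set: runs of the expensive model at sampled forcing conditions.
    X_train = rng.uniform(-2, 2, size=(2000, 3))
    y_train = expensive_model(X_train)

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X_train, y_train)

    # Fast emulation at new conditions.
    X_new = rng.uniform(-2, 2, size=(5, 3))
    print(surrogate.predict(X_new))
    print(expensive_model(X_new))
    ```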

  14. ATHLETE: Trading Complexity for Mass in Roving Vehicles

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    2013-01-01

    This paper describes a scaling analysis of ATHLETE for exploration of the moon, Mars and Near-Earth Asteroids (NEAs) in comparison to a more conventional vehicle configuration. Recently, the focus of human exploration beyond LEO has been on NEAs. A low gravity testbed has been constructed in the ATHLETE lab, with six computer-controlled winches able to lift ATHLETE and payloads so as to simulate the motion of the system in the vicinity of a NEA or to simulate ATHLETE on extreme terrain in lunar or Mars gravity. Test results from this system are described.

  15. Representation of Precipitation in a Decade-long Continental-Scale Convection-Resolving Climate Simulation

    NASA Astrophysics Data System (ADS)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2017-12-01

    The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Regional climate simulations using horizontal resolutions of O(1 km) allow deep convection to be explicitly resolved, leading to an improved representation of the water cycle. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. A new version of the Consortium for Small-Scale Modeling weather and climate model (COSMO) is capable of exploiting new supercomputer architectures employing GPU accelerators, and allows convection-resolving climate simulations on computational domains spanning continents and time periods up to one decade. We present results from a decade-long, convection-resolving climate simulation on a European-scale computational domain. The simulation has a grid spacing of 2.2 km, 1536x1536x60 grid points, covers the period 1999-2008, and is driven by the ERA-Interim reanalysis. Specifically, we present an evaluation of hourly rainfall using a wide range of data sets, including several rain-gauge networks and a remotely-sensed lightning data set. Substantial improvements are found in terms of the diurnal cycles of precipitation amount, wet-hour frequency and the all-hour 99th percentile. However, the results also reveal substantial differences between regions with and without strong orographic forcing. Furthermore, we present an index for deep-convective activity based on the statistics of vertical motion. Comparison of the index with lightning data shows that the convection-resolving climate simulations are able to reproduce important features of the annual cycle of deep convection in Europe. Leutwyler D., D. Lüthi, N. Ban, O. Fuhrer, and C. Schär (2017): Evaluation of the Convection-Resolving Climate Modeling Approach on Continental Scales, J. Geophys. Res. Atmos., 122, doi:10.1002/2016JD026013.

  16. A dynamical systems approach to studying midlatitude weather extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Caballero, Rodrigo; Faranda, Davide

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. The ability to predict these events is therefore a topic of crucial importance. Here we propose a novel predictability pathway for extreme events by building upon recent advances in dynamical systems theory. We show that simple dynamical systems metrics can be used to identify sets of large-scale atmospheric flow patterns with similar spatial structure and temporal evolution on time scales of several days to a week. In regions where these patterns favor extreme weather, they afford particularly good predictability of the extremes. We specifically test this technique on the atmospheric circulation in the North Atlantic region, where it provides predictability of large-scale wintertime surface temperature extremes in Europe up to 1 week in advance.

  17. Methods of photoelectrode characterization with high spatial and temporal resolution

    DOE PAGES

    Esposito, Daniel V.; Baxter, Jason B.; John, Jimmy; ...

    2015-06-19

    Here, materials and photoelectrode architectures that are highly efficient, extremely stable, and made from low-cost materials are required for commercially viable photoelectrochemical (PEC) water-splitting technology. A key challenge is the heterogeneous nature of real-world materials, which often possess spatial variation in their crystal structure, morphology, and/or composition at the nano-, micro-, or macro-scale. Different structures and compositions can have vastly different properties and can therefore strongly influence the overall performance of the photoelectrode through complex structure–property relationships. A complete understanding of photoelectrode materials would also involve elucidation of processes such as carrier collection and electrochemical charge transfer that occur at very fast time scales. We present herein an overview of a broad suite of experimental and computational tools that can be used to define the structure–property relationships of photoelectrode materials at small dimensions and on fast time scales. A major focus is on in situ scanning-probe measurement (SPM) techniques that possess the ability to measure differences in optical, electronic, catalytic, and physical properties with nano- or micro-scale spatial resolution. In situ ultrafast spectroscopic techniques, used to probe carrier dynamics involved with processes such as carrier generation, recombination, and interfacial charge transport, are also discussed. Complementing all of these experimental techniques are computational atomistic modeling tools, which can be invaluable for interpreting experimental results, aiding in materials discovery, and interrogating PEC processes at length and time scales not currently accessible by experiment. In addition to reviewing the basic capabilities of these experimental and computational techniques, we highlight key opportunities and limitations of applying these tools for the development of PEC materials.

  18. Advances and Challenges In Uncertainty Quantification with Application to Climate Prediction, ICF design and Science Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Klein, R.; Woodward, C. S.; Johannesson, G.; Domyancic, D.; Covey, C. C.; Lucas, D. D.

    2012-12-01

    Uncertainty Quantification (UQ) is a critical field within 21st century simulation science that resides at the very center of the web of emerging predictive capabilities. The science of UQ holds the promise of giving much greater meaning to the results of complex large-scale simulations, allowing for quantifying and bounding uncertainties. This powerful capability will yield new insights into scientific predictions (e.g. climate) of great impact on both national and international arenas, allow informed decisions on the design of critical experiments (e.g. ICF capsule design, MFE, NE) in many scientific fields, and assign confidence bounds to scientifically predictable outcomes (e.g. nuclear weapons design). In this talk, I will discuss a major new strategic initiative (SI) we have developed at Lawrence Livermore National Laboratory to advance the science of Uncertainty Quantification at LLNL, focusing in particular on (a) the research and development of new algorithms and methodologies of UQ as applied to multi-physics multi-scale codes, (b) incorporation of these advancements into a global UQ Pipeline (i.e. a computational superstructure) that will simplify user access to sophisticated tools for UQ studies as well as act as a self-guided, self-adapting UQ engine for UQ studies on extreme computing platforms, and (c) use of laboratory applications as a test bed for new algorithms and methodologies. The initial SI focus has been on applications for the quantification of uncertainty associated with climate prediction, but the validated UQ methodologies we have developed are now being fed back into Science Based Stockpile Stewardship (SSS) and ICF UQ efforts. To make advancements in several of these UQ grand challenges, I will focus in this talk on the following three research areas in our Strategic Initiative: error estimation in multi-physics and multi-scale codes; tackling the "Curse of High Dimensionality"; and development of an advanced UQ Computational Pipeline to enable complete UQ workflow and analysis for ensemble runs at the extreme scale (e.g. exascale) with self-guiding adaptation in the UQ Pipeline engine. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).

  19. Effects of interactive visual feedback training on post-stroke pusher syndrome: a pilot randomized controlled study.

    PubMed

    Yang, Yea-Ru; Chen, Yi-Hua; Chang, Heng-Chih; Chan, Rai-Chi; Wei, Shun-Hwa; Wang, Ray-Yau

    2015-10-01

    We investigated the effects of a computer-generated interactive visual feedback training program on the recovery from pusher syndrome in stroke patients. Assessor-blinded, pilot randomized controlled study. A total of 12 stroke patients with pusher syndrome were randomly assigned to either the experimental group (N = 7, computer-generated interactive visual feedback training) or control group (N = 5, mirror visual feedback training). The scale for contraversive pushing for severity of pusher syndrome, the Berg Balance Scale for balance performance, and the Fugl-Meyer assessment scale for motor control were the outcome measures. Patients were assessed pre- and posttraining. A comparison of pre- and posttraining assessment results revealed that both training programs led to the following significant changes: decreased severity of pusher syndrome scores (decreases of 4.0 ± 1.1 and 1.4 ± 1.0 in the experimental and control groups, respectively); improved balance scores (increases of 14.7 ± 4.3 and 7.2 ± 1.6 in the experimental and control groups, respectively); and higher scores for lower extremity motor control (increases of 8.4 ± 2.2 and 5.6 ± 3.3 in the experimental and control groups, respectively). Furthermore, the computer-generated interactive visual feedback training program produced significantly better outcomes in the improvement of pusher syndrome (p < 0.01) and balance (p < 0.05) compared with the mirror visual feedback training program. Although both training programs were beneficial, the computer-generated interactive visual feedback training program more effectively aided recovery from pusher syndrome compared with mirror visual feedback training. © The Author(s) 2014.

  20. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.

  1. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off-line and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
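
    A minimal sketch of the standard computation the article seeks to accelerate: solve the discrete algebraic Riccati equation for a small toy state-space model and form the asymptotic Kalman gain. SciPy's generic solver is used; the matrices are arbitrary illustrative values, and the article's cropped-screen approximation is not implemented.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    n = 50                                           # toy number of phase modes
    rng = np.random.default_rng(2)

    A = 0.99 * np.eye(n)                             # near-unit temporal evolution of the phase
    C = rng.standard_normal((n, n))                  # measurement (wavefront sensor) matrix
    Q = 0.01 * np.eye(n)                             # process (turbulence) noise covariance
    R = 0.1 * np.eye(n)                              # measurement noise covariance

    # Steady-state error covariance from the discrete algebraic Riccati equation
    # (filtering form via duality), then the corresponding asymptotic Kalman gain.
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    print(K.shape)
    ```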

  2. [Multi-temporal scale analysis of impacts of extreme high temperature on net carbon uptake in subtropical coniferous plantation.

    PubMed

    Zhang, Mi; Wen, Xue Fa; Zhang, Lei Ming; Wang, Hui Min; Guo, Yi Wen; Yu, Gui Rui

    2018-02-01

    Extreme high temperature is one of the important extreme weather phenomena that impact the forest ecosystem carbon cycle. In this study, applying CO2 flux and routine meteorological data measured during 2003-2012, we examined the impacts of extreme high temperature and extreme high temperature events on the net carbon uptake of a subtropical coniferous plantation in Qianyanzhou. Combining these data with wavelet analysis, we analyzed the environmental controls on net carbon uptake at different temporal scales when extreme high temperature and extreme high temperature events occurred. The results showed that mean daily cumulative NEE decreased by 51% on days with a daily maximum air temperature between 35 ℃ and 40 ℃, compared with days between 30 ℃ and 34 ℃. The effects of extreme high temperature and extreme high temperature events on monthly NEE and annual NEE were related to the strength and duration of the extreme high temperature event. In 2003, when a strong extreme high temperature event occurred, the sum of monthly cumulative NEE in July and August was only -11.64 g C·m⁻²·(2 month)⁻¹. This value was 90% lower than the multi-year average, and the relative variation of annual NEE reached -6.7%. In July and August, when extreme high temperature and extreme high temperature events occurred, air temperature (Ta) and vapor pressure deficit (VPD) were the dominant controls on the daily variation of NEE; the coherency between NEE and Ta and between NEE and VPD was 0.97 and 0.95, respectively. At 8-, 16-, and 32-day periods, Ta, VPD, soil water content at 5 cm depth (SWC), and precipitation (P) controlled NEE, with coherencies between NEE and SWC and between NEE and P higher than 0.8 at the monthly scale. The results indicated that atmospheric water deficit impacted NEE at short temporal scales when extreme high temperature and extreme high temperature events occurred, while both atmospheric water deficit and soil drought stress impacted NEE at longer temporal scales in this ecosystem.

  3. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10² m and minute scales in a fast and computer-efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in the flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.
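
    STREAP itself is not reproduced here; as a toy illustration of stochastic generation of high-resolution rainfall intensities, the sketch below draws a minute-scale series from an alternating wet/dry process with lognormal wet intensities and checks an extreme-intensity statistic. All parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_minutes = 60 * 24 * 365                        # one synthetic year at 1-min resolution
    p_wet = 0.05                                     # probability a minute is wet (assumed)
    wet = rng.random(n_minutes) < p_wet

    intensity = np.zeros(n_minutes)                  # rainfall intensity (mm/h)
    intensity[wet] = rng.lognormal(mean=0.5, sigma=1.0, size=wet.sum())

    # Extreme-intensity statistics of the kind an urban-drainage study would check:
    wet_values = intensity[intensity > 0]
    print("wet-minute 99.9th percentile [mm/h]:", np.percentile(wet_values, 99.9))
    print("maximum 10-minute mean intensity [mm/h]:",
          np.convolve(intensity, np.ones(10) / 10, mode="valid").max())
    ```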

  4. Prediction of Rare Transitions in Planetary Atmosphere Dynamics Between Attractors with Different Number of Zonal Jets

    NASA Astrophysics Data System (ADS)

    Bouchet, F.; Laurie, J.; Zaboronski, O.

    2012-12-01

    We describe transitions between attractors with either one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, occurring over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require, on the one hand, very good resolution in order to simulate the turbulence accurately and, on the other hand, simulations performed over an extremely long time. Those conditions are usually not met together in any realistic model. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversals, atmospheric flows, and so on). Their study through numerical computation is inaccessible using conventional means. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory, and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude shorter than the typical transition time. We illustrate the power of those tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist. Those attractors correspond to turbulent flows dominated by either one or more zonal jets, similar to midlatitude atmospheric jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition rates, which correspond to transition times several orders of magnitude larger than a typical time determined from the jet structure. We discuss the medium-term generalization of those results to models with more complexity, such as primitive equations or GCMs.

  5. Effects of Action Observational Training Plus Brain-Computer Interface-Based Functional Electrical Stimulation on Paretic Arm Motor Recovery in Patient with Stroke: A Randomized Controlled Trial.

    PubMed

    Kim, TaeHoon; Kim, SeongSik; Lee, ByoungHee

    2016-03-01

    The purpose of this study was to investigate whether action observational training (AOT) plus brain-computer interface-based functional electrical stimulation (BCI-FES) has a positive influence on motor recovery of the paretic upper extremity in patients with stroke. This was a hospital-based, randomized controlled trial with a blinded assessor. Thirty patients with a first-time stroke were randomly allocated to one of two groups: the BCI-FES group (n = 15) and the control group (n = 15). The BCI-FES group received AOT plus BCI-FES on the paretic upper extremity five times per week for 4 weeks, while both groups received conventional therapy. The primary outcomes were the Fugl-Meyer Assessment of the Upper Extremity, the Motor Activity Log (MAL), the Modified Barthel Index and range of motion of the paretic arm. A blinded assessor evaluated the outcomes at baseline and at 4 weeks. Baseline outcomes did not differ significantly between the two groups. After 4 weeks, the Fugl-Meyer Assessment of the Upper Extremity sub-items (total, shoulder and wrist), MAL (Amount of Use and Quality of Movement), Modified Barthel Index and wrist flexion range of motion were significantly higher in the BCI-FES group (p < 0.05). AOT plus BCI-based FES is effective in paretic arm rehabilitation by improving upper extremity performance. The motor improvements suggest that AOT plus BCI-based FES can be used as a therapeutic tool for stroke rehabilitation. The limitations of the study are that subjects had a certain limited level of upper arm function and the sample size was comparatively small; hence, it is recommended that future large-scale trials consider stratified and larger populations according to upper arm function. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Force Field Accelerated Density Functional Theory Molecular Dynamics for Simulation of Reactive Systems at Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Lindsey, Rebecca; Goldman, Nir; Fried, Laurence

    Understanding chemistry at extreme conditions is crucial in fields including geochemistry, astrobiology, and alternative energy. First-principles methods can provide valuable microscopic insights into such systems while circumventing the risks of physical experiments; however, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models through molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions of atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater-than-two-body interactions and model transferability to different state points, and discuss approaches to ensure smooth and reasonable model shape outside the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
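
    A minimal sketch of the core fitting idea described above: represent a pairwise force as a Chebyshev polynomial series over a bounded distance range and fit the coefficients to reference forces by least squares. The reference data here are synthetic, and the many-body terms discussed in the abstract are omitted.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    rng = np.random.default_rng(4)

    # Synthetic "reference" pair forces on a sampled distance domain [r_min, r_max].
    r_min, r_max, order = 0.8, 4.0, 12
    r = rng.uniform(r_min, r_max, 500)
    f_ref = 24 * (2 / r**13 - 1 / r**7) + 0.02 * rng.standard_normal(r.size)   # LJ-like + noise

    # Map distances to [-1, 1], the natural domain of Chebyshev polynomials.
    s = 2 * (r - r_min) / (r_max - r_min) - 1

    coeffs = C.chebfit(s, f_ref, deg=order)           # least-squares Chebyshev coefficients

    # Evaluate the fitted force model at new distances inside the training domain.
    r_new = np.linspace(1.0, 3.5, 6)
    s_new = 2 * (r_new - r_min) / (r_max - r_min) - 1
    print(C.chebval(s_new, coeffs))
    ```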

  7. A Decade-long Continental-Scale Convection-Resolving Climate Simulation on GPUs

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-04-01

    The representation of moist convection in climate models is a major challenge because of the small scales involved. Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. Using horizontal grid spacings of O(1 km), they allow deep convection to be resolved explicitly, leading to an improved representation of the water cycle. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new system designs that combine conventional multicore CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models to be fully ported to GPUs is the Consortium for Small-Scale Modeling (COSMO) weather and climate model. This new version allows us to expand the simulation domain to areas spanning continents and the simulated period to up to a decade. We present results from a decade-long, convection-resolving climate simulation using the GPU-enabled COSMO version. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and mesoscale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss the performance of the convection-resolving modeling approach on the European scale. Specifically, we focus on the annual cycle of convection in Europe, on the organization of convective clouds, and on the verification of hourly rainfall against various high-resolution datasets.

  8. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional global climate models (GCMs) while remaining computationally viable for climate-length time scales. The multi-scale modeling framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously obtained only by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but they are significant in terms of their interaction with the land surface and their potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale `Super-Parameterized' CESM, in which large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, in which each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.

  9. The origins of multifractality in financial time series and the effect of extreme events

    NASA Astrophysics Data System (ADS)

    Green, Elena; Hanan, William; Heffernan, Daniel

    2014-06-01

    This paper presents the results of multifractal testing of two sets of financial data: daily data of the Dow Jones Industrial Average (DJIA) index and one-minute data of the Euro Stoxx 50 index. Where multifractal scaling is found, the spectrum of scaling exponents is calculated via multifractal detrended fluctuation analysis (MF-DFA). In both cases, further investigation reveals that the temporal correlations in the data are a more significant source of the multifractal scaling than the distributions of the returns. It is also shown that the extreme events that make up the heavy tails of the distribution of the Euro Stoxx 50 log returns distort the scaling in the data set: the most extreme events are inimical to the scaling regime. This result is in contrast to previous findings that extreme events contribute to multifractality.
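
    For readers unfamiliar with the method, a minimal MF-DFA estimator is sketched below on synthetic data; the scale range, detrending order, and q values are illustrative choices, and this is not the authors' implementation. Multifractality shows up as a generalized Hurst exponent h(q) that varies with q.

      import numpy as np

      def mfdfa(x, scales, q_values, order=1):
          # Minimal multifractal detrended fluctuation analysis (MF-DFA):
          # returns generalized Hurst exponents h(q) from the slopes of
          # log F_q(s) versus log s.
          x = np.asarray(x, dtype=float)
          profile = np.cumsum(x - x.mean())              # integrated (profile) series
          h_q = []
          for q in q_values:
              F = []
              for s in scales:
                  n_seg = len(profile) // s
                  rms = []
                  for v in range(n_seg):
                      seg = profile[v * s:(v + 1) * s]
                      t = np.arange(s)
                      coeffs = np.polyfit(t, seg, order)          # local polynomial trend
                      detrended = seg - np.polyval(coeffs, t)
                      rms.append(np.mean(detrended ** 2))
                  rms = np.asarray(rms)
                  if q == 0:
                      F.append(np.exp(0.5 * np.mean(np.log(rms))))
                  else:
                      F.append(np.mean(rms ** (q / 2.0)) ** (1.0 / q))
              slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
              h_q.append(slope)
          return np.array(h_q)

      # Example on synthetic "returns": for uncorrelated Gaussian noise h(q) stays near 0.5.
      rng = np.random.default_rng(0)
      returns = rng.standard_normal(2 ** 14)
      scales = np.unique(np.logspace(4, 10, 12, base=2).astype(int))
      print(mfdfa(returns, scales, q_values=[-3, -1, 2, 3]))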

  10. Discrete-continuum multiscale model for transport, biomass development and solid restructuring in porous media

    NASA Astrophysics Data System (ADS)

    Ray, Nadja; Rupp, Andreas; Prechtel, Alexander

    2017-09-01

    Upscaling transport in porous media in the presence of both biomass development and simultaneous structural changes in the solid matrix is extremely challenging, because both processes affect the medium's porosity as well as its mass transport parameters and flow paths. We address this challenge by means of a multiscale model. At the pore scale, the local discontinuous Galerkin (LDG) method is used to solve differential equations describing, in particular, the development of the bacteria and the nutrients. Likewise, a sticky agent binding together solid or biological cells is considered. This is combined with a cellular automaton method (CAM) capturing structural changes of the underlying computational domain stemming from biomass development and solid restructuring. Findings from standard homogenization theory are applied to determine the medium's characteristic time- and space-dependent properties. Investigating these results enhances our understanding of the strong interplay between a medium's functional properties and its geometric structure. Finally, integrating such properties as model parameters into models defined on a larger scale makes it possible to reflect the impact of pore-scale processes on the larger scale.

  11. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfven time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
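
    As an illustration of the Jacobian-free Newton-Krylov idea (in a far simpler setting than implicit AMR MHD), the sketch below solves a small nonlinear reaction-diffusion boundary-value problem with SciPy's matrix-free Newton-Krylov solver; the grid size, nonlinearity, and tolerances are arbitrary choices for demonstration only.

      import numpy as np
      from scipy.optimize import newton_krylov

      # Jacobian-free Newton-Krylov on a toy 1D steady problem:
      #   u'' + u - u**3 = 0,  u(0) = -1, u(1) = 1,
      # discretized with second-order finite differences. No Jacobian is ever
      # assembled; the Krylov iterations only need residual evaluations.

      n = 100
      h = 1.0 / (n + 1)

      def residual(u):
          full = np.concatenate(([-1.0], u, [1.0]))             # Dirichlet boundary values
          laplacian = (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h ** 2
          return laplacian + u - u ** 3

      u0 = np.linspace(-1.0, 1.0, n + 2)[1:-1]                  # initial guess
      solution = newton_krylov(residual, u0, f_tol=1e-8)
      print("residual norm:", np.linalg.norm(residual(solution)))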

  12. TLEM 2.0 - a comprehensive musculoskeletal geometry dataset for subject-specific modeling of lower extremity.

    PubMed

    Carbone, V; Fluit, R; Pellikaan, P; van der Krogt, M M; Janssen, D; Damsgaard, M; Vigneron, L; Feilkas, T; Koopman, H F J M; Verdonschot, N

    2015-03-18

    When analyzing complex biomechanical problems such as predicting the effects of orthopedic surgery, subject-specific musculoskeletal models are essential to achieve reliable predictions. The aim of this paper is to present the Twente Lower Extremity Model 2.0, a new comprehensive dataset of the musculoskeletal geometry of the lower extremity, based on medical imaging data and dissection performed on the right lower extremity of a fresh male cadaver. Bone, muscle and subcutaneous fat (including skin) volumes were segmented from computed tomography and magnetic resonance image scans. Inertial parameters were estimated from the image-based segmented volumes. A complete cadaver dissection was performed, in which bony landmarks, attachment sites and lines of action of 55 muscle actuators and 12 ligaments, bony wrapping surfaces, and joint geometry were measured. The obtained musculoskeletal geometry dataset was finally implemented in the AnyBody Modeling System (AnyBody Technology A/S, Aalborg, Denmark), resulting in a model consisting of 12 segments, 11 joints and 21 degrees of freedom, and including 166 muscle-tendon elements for each leg. The new TLEM 2.0 dataset was purposely built to be easily combined with novel image-based scaling techniques, such as bone surface morphing, muscle volume registration and muscle-tendon path identification, in order to obtain subject-specific musculoskeletal models in a quick and accurate way. The complete dataset, including CT and MRI scans and segmented volumes and surfaces, is made available at http://www.utwente.nl/ctw/bw/research/projects/TLEMsafe to the biomechanical community, in order to accelerate the development and adoption of subject-specific models on a large scale. TLEM 2.0 is freely shared for non-commercial use only, under acceptance of the TLEMsafe Research License Agreement. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. weather@home 2: validation of an improved global-regional climate modelling system

    NASA Astrophysics Data System (ADS)

    Guillod, Benoit P.; Jones, Richard G.; Bowery, Andy; Haustein, Karsten; Massey, Neil R.; Mitchell, Daniel M.; Otto, Friederike E. L.; Sparrow, Sarah N.; Uhe, Peter; Wallom, David C. H.; Wilson, Simon; Allen, Myles R.

    2017-05-01

    Extreme weather events can have large impacts on society and, in many regions, are expected to change in frequency and intensity with climate change. Owing to the relatively short observational record, climate models are useful tools as they allow for the generation of a larger sample of extreme events, for attributing recent events to anthropogenic climate change, and for projecting changes in such events into the future. The modelling system known as weather@home, consisting of a global climate model (GCM) with a nested regional climate model (RCM) and driven by sea surface temperatures, allows one to generate a very large ensemble with the help of volunteer distributed computing. This is a key tool for understanding many aspects of extreme events. Here, a new version of the weather@home system (weather@home 2) with a higher-resolution RCM over Europe is documented and a broad validation of the climate is performed. The new model includes a more recent land-surface scheme in both GCM and RCM, in which subgrid-scale land-surface heterogeneity is newly represented using tiles, and an increase in RCM resolution from 50 to 25 km. The GCM performs similarly to the previous version, with some improvements in the representation of mean climate. The European RCM temperature biases are overall reduced, in particular the warm bias over eastern Europe, but large biases remain. Precipitation is improved over the Alps in summer, with mixed changes in other regions and seasons. The model is shown to represent the main classes of regional extreme events reasonably well and shows a good sensitivity to its drivers. In particular, given the improvements in this version of the weather@home system, it is likely that more reliable statements can be made with regard to impacts, especially at more localized scales.

  14. The Peak Structure and Future Changes of the Relationships Between Extreme Precipitation and Temperature

    NASA Technical Reports Server (NTRS)

    Wang, Guiling; Wang, Dagang; Trenberth, Kevin E.; Erfanian, Amir; Yu, Miao; Bosilovich, Michael G.; Parr, Dana T.

    2017-01-01

    Theoretical models predict that, in the absence of moisture limitation, extreme precipitation intensity could increase exponentially with temperature at a rate determined by the Clausius-Clapeyron (C-C) relationship. Climate models project a continuous increase of precipitation extremes through the twenty-first century over most of the globe. However, some station observations suggest a negative scaling of extreme precipitation with very high temperatures, raising doubts about a future increase of precipitation extremes. Here we show that, for the present-day climate over most of the globe, the curve relating daily precipitation extremes to local temperature has a peak structure, increasing as expected over the low-to-medium range of temperatures but decreasing at high temperatures. However, this peak-shaped relationship does not imply a potential upper limit for future precipitation extremes. Climate models project that both the peak of extreme precipitation and the temperature at which it peaks (T(sub peak)) will increase with warming; the two increases generally conform to the C-C scaling rate in mid- and high-latitudes, and to a super-C-C scaling over most of the tropics. Because projected increases of local mean temperature (T(sub mean)) far exceed projected increases of T(sub peak) over land, the conventional approach of relating extreme precipitation to T(sub mean) produces a misleading sub-C-C scaling rate.
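
    For reference, the C-C rate invoked in such studies follows from the saturation vapor pressure relation; the numerical values below are standard approximations added here only to make the roughly 7% per degree figure concrete, and are not taken from the paper.

      \frac{d\ln e_s}{dT} \;=\; \frac{L_v}{R_v T^2}
        \;\approx\; \frac{2.5\times10^{6}\,\mathrm{J\,kg^{-1}}}
                         {461.5\,\mathrm{J\,kg^{-1}\,K^{-1}}\,(273\,\mathrm{K})^{2}}
        \;\approx\; 0.073\ \mathrm{K^{-1}} \quad (\text{about 7\% per degree}),

      \text{so that under C-C scaling } P_{\mathrm{ext}}(T) \approx P_{\mathrm{ext}}(T_0)\, e^{0.07\,(T - T_0)}.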

  15. Changes in intensity of precipitation extremes in Romania on very high temporal scale and implications on the validity of the Clausius-Clapeyron relation

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Baciu, Madalina; Breza, Traian; Dumitrescu, Alexandru; Stoica, Cerasela; Baghina, Nina

    2016-04-01

    Many observational, theoretical and climate-model-based studies have suggested that warmer climates lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. In this way, it has been suggested that extreme precipitation events may increase at the Clausius-Clapeyron (CC) rate under global warming and the constraint of constant relative humidity. However, recent studies show that the relationship between extreme rainfall intensity and atmospheric temperature is much more complex than the CC relationship would suggest and depends mainly on the temporal resolution of the precipitation data, the region, the storm type and whether the analysis is conducted on storm events rather than fixed intervals. The present study examines the dependence of very high temporal resolution extreme rainfall intensity on daily temperature, with respect to the verification of the CC relation. To address this objective, the analysis is conducted on rainfall events rather than fixed intervals, using rainfall data based on graphic records that provide intensities (mm/min) for each interval of constant per-minute intensity. The annual interval with such data available (April to October) is considered at 5 stations over the period 1950-2007. For the Bucuresti-Filaret station, the analysis is extended over a longer period (1898-2007). For each rainfall event, the maximum intensity (mm/min) is retained, and these time series are used in the further analysis (abbreviated in the following as IMAX). The IMAX data were divided into 2°C-wide bins based on the daily mean temperature. Bins with fewer than 100 values were excluded. The 90th, 99th and 99.9th percentiles were computed from the binned data using the empirical distribution, and their variability was compared to the CC scaling (i.e., the exponential relation given by a 7% increase per degree of temperature rise). The results show a dependence close to double the CC relation for temperatures below about 22°C and negative scaling rates for higher temperatures. This behaviour is similar at all 5 analysed stations over the common period 1950-2007. The scaling holds most closely for the 90th percentile, while for the higher percentiles the rainfall intensity response to warming sometimes exceeds the CC rate. For the Bucuresti-Filaret station, the results are similar over the longer period (1898-2007), showing that these findings are robust. Similar techniques have previously been applied to the hourly rainfall intensities recorded at 9 stations (including the 5 above), and the results are slightly different: the 90th percentile shows a dependence close to the CC relation for all temperatures, while the 99th and 99.9th percentiles exhibit rates close to double the CC rate for temperatures between about 10°C and 22°C and negative scaling rates for higher temperatures. In conclusion, these results show that the dependence of extreme precipitation intensity on atmospheric temperature in Romania depends mainly on the temporal resolution of the precipitation data and on how extreme the precipitation event is (moderate or stronger); these findings are broadly in agreement with the conclusions of the previous international studies mentioned above, with some regional specific features, underlining the importance of regional studies.
The results presented in this study were funded by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the research project CLIMHYDEX, "Changes in climate extremes and associated impact in hydrological events in Romania", code PNII-ID-2011-2-0073 (http://climhydex.meteoromania.ro).
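
    A minimal sketch of the binning-and-percentile procedure described above, run on synthetic data; the 2°C bin width, the 100-value cutoff, and the 7% per degree reference follow the text, while the data, distributions, and variable names are purely illustrative.

      import numpy as np

      # Bin event-maximum rainfall intensities by daily mean temperature (2 degC bins),
      # compute a high percentile per bin, and compare its growth with temperature to
      # the Clausius-Clapeyron reference of ~7% per degree.

      rng = np.random.default_rng(0)
      temp = rng.uniform(0.0, 30.0, 20000)                         # synthetic daily mean temperatures
      imax = rng.gamma(2.0, 0.05 * np.exp(0.07 * temp))            # synthetic event-maximum intensities

      edges = np.arange(0.0, 32.0, 2.0)
      centers, p90 = [], []
      for lo, hi in zip(edges[:-1], edges[1:]):
          sel = imax[(temp >= lo) & (temp < hi)]
          if sel.size >= 100:                                      # discard sparsely populated bins
              centers.append(0.5 * (lo + hi))
              p90.append(np.percentile(sel, 90))

      slope = np.polyfit(centers, np.log(np.asarray(p90)), 1)[0]
      print(f"observed scaling: {100 * slope:.1f} % per degC (C-C reference: ~7 % per degC)")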

  16. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  17. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...

    2017-04-24

    Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed in a local manner, further scalability is enabled, not only because of the intrinsically lower cost of recovering locally, but also because of derived effects for some application types. In this paper we model one such effect, namely multiple failure masking, which manifests when running stencil parallel computations in an environment where failures are recovered locally. First, the delay-propagation shape of one or multiple locally recovered failures is modeled to enable several analyses of the probability of different levels of failure masking under certain stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  18. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson

    Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed in a local manner, further scalability is enabled, not only because of the intrinsically lower cost of recovering locally, but also because of derived effects for some application types. In this paper we model one such effect, namely multiple failure masking, which manifests when running stencil parallel computations in an environment where failures are recovered locally. First, the delay-propagation shape of one or multiple locally recovered failures is modeled to enable several analyses of the probability of different levels of failure masking under certain stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  19. A centennial tribute to G.K. Gilbert's Hydraulic Mining Débris in the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    James, L. A.; Phillips, J. D.; Lecce, S. A.

    2017-10-01

    G.K. Gilbert's (1917) classic monograph, Hydraulic-Mining Débris in the Sierra Nevada, is described and put into the context of modern geomorphic knowledge. The emphasis here is on large-scale applied fluvial geomorphology, but other key elements, such as coastal geomorphology, are also briefly covered. A brief synopsis outlines key elements of the monograph, followed by discussions of its most influential aspects, including the integrated watershed perspective, the extreme example of anthropogenic sedimentation, the computation of a quantitative, semidistributed sediment budget, and the advent of sediment-wave theory. Although Gilbert did not address concepts of equilibrium and grade in much detail, the rivers of the northwestern Sierra Nevada were highly disrupted and thrown into a condition of nonequilibrium. Therefore, concepts of equilibrium and grade, for which Gilbert's early work is often cited, are discussed. Gilbert's work is put into the context of complex nonlinear dynamics in geomorphic systems and of how these concepts can be used to interpret the nonequilibrium systems described by Gilbert. Broad, basin-scale studies were common in the period, but few were as quantitative and empirically rigorous, or employed such a range of methodologies, as PP105. None demonstrated such an extreme case of anthropogeomorphic change.

  20. A pilot study of contralateral homonymous muscle activity simulated electrical stimulation in chronic hemiplegia.

    PubMed

    Osu, Rieko; Otaka, Yohei; Ushiba, Junichi; Sakata, Sachiko; Yamaguchi, Tomofumi; Fujiwara, Toshiyuki; Kondo, Kunitsugu; Liu, Meigen

    2012-01-01

    For the recovery of hemiparetic hand function, a therapy called contralateral homonymous muscle activity stimulated electrical stimulation (CHASE) was developed, which combines electrical stimulation with bilateral movements, and its feasibility was studied in three chronic stroke patients with severe hand hemiparesis. Patients with a subcortical lesion were asked to extend their wrist and fingers bilaterally while an electromyogram (EMG) was recorded from the extensor carpi radialis (ECR) muscle of the unaffected hand. Electrical stimulation was applied to the homonymous wrist and finger extensors of the affected side. The intensity of the electrical stimulation was computed from the EMG and scaled so that the movements of the paretic hand looked similar to those of the unaffected side. The patients received 30 minutes of therapy per day for 2 weeks. Improvement in the active range of motion of wrist extension was observed for all patients. There was a decrease in the modified Ashworth scale scores for the flexors. Fugl-Meyer assessment scores of motor function of the upper extremities improved in two of the patients. The results suggest that a positive outcome can be obtained using the CHASE system for upper extremity rehabilitation of patients with severe hemiplegia.
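
    A heavily simplified sketch of the kind of EMG-to-stimulation mapping described above (rectify, smooth, normalize, scale); the sampling rate, filter settings, normalization reference, and current range are illustrative assumptions, not the published CHASE parameters.

      import numpy as np
      from scipy.signal import butter, filtfilt

      # Map an EMG envelope from the unaffected extensor muscle to a stimulation
      # amplitude for the paretic side: rectify, low-pass filter, normalize by a
      # reference maximum contraction, then scale into an assumed current range.

      fs = 1000.0                                     # sampling rate in Hz (assumed)
      rng = np.random.default_rng(0)
      emg = rng.normal(0.0, 1.0, 5000) * np.clip(np.sin(np.linspace(0, 3 * np.pi, 5000)), 0, None)

      rectified = np.abs(emg)
      b, a = butter(2, 5.0 / (fs / 2), btype="low")   # 5 Hz envelope filter (assumed)
      envelope = filtfilt(b, a, rectified)

      reference_max = envelope.max()                  # stand-in for a calibration contraction
      i_min, i_max = 5.0, 30.0                        # stimulation current range in mA (assumed)
      stim_current = i_min + (i_max - i_min) * np.clip(envelope / reference_max, 0.0, 1.0)
      print("peak commanded current (mA):", stim_current.max())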

  1. Observation of gravity waves during the extreme tornado outbreak of 3 April 1974

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Phan, T.; Smith, R. E.

    1978-01-01

    A continuous wave-spectrum high-frequency radiowave Doppler sounder array was used to observe upper-atmospheric disturbances during an extreme tornado outbreak. The observations indicated that gravity waves with two harmonic wave periods were detected at the F-region ionospheric height. Using a group ray path computational technique, the observed gravity waves were traced in order to locate potential sources. The signals were apparently excited 1-3 hours before tornado touchdown. Reverse ray tracing indicated that the wave source was located at the aurora zone with a Kp index of 6 at the time of wave excitation. The summation of the 24-hour Kp index for the day was 36. The results agree with existing theories (Testud, 1970; Titheridge, 1971; Kato, 1976) for the excitation of large-scale traveling ionospheric disturbances associated with geomagnetic activity in the aurora zone.

  2. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
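
    To make the Gaussian-process machinery concrete, here is a minimal spatial kriging sketch with a squared-exponential covariance; the covariance family, its hyperparameters, and the synthetic data are illustrative choices, not those used in the project.

      import numpy as np

      # Minimal Gaussian-process (simple kriging) prediction in 1D with a
      # squared-exponential covariance k(x, x') = s2 * exp(-(x - x')^2 / (2 l^2)).

      def sqexp(x1, x2, s2=1.0, ell=0.3):
          return s2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

      rng = np.random.default_rng(0)
      x_obs = np.sort(rng.uniform(0.0, 1.0, 25))
      y_obs = np.sin(2 * np.pi * x_obs) + 0.1 * rng.standard_normal(25)   # synthetic field
      x_new = np.linspace(0.0, 1.0, 200)

      noise = 0.1 ** 2
      K = sqexp(x_obs, x_obs) + noise * np.eye(25)
      K_star = sqexp(x_new, x_obs)
      alpha = np.linalg.solve(K, y_obs)

      mean = K_star @ alpha                                               # predictive mean
      cov = sqexp(x_new, x_new) - K_star @ np.linalg.solve(K, K_star.T)
      std = np.sqrt(np.clip(np.diag(cov), 0.0, None))                     # predictive std dev
      print("max predictive std:", std.max())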

  3. Rasch analysis of the Italian Lower Extremity Functional Scale: insights on dimensionality and suggestions for an improved 15-item version.

    PubMed

    Bravini, Elisabetta; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano

    2017-04-01

    To investigate the dimensionality and measurement properties of the Italian Lower Extremity Functional Scale using both classical test theory and Rasch analysis methods, and to provide insights for an improved version of the questionnaire. Rasch analysis of individual patient data. Rehabilitation centre. A total of 135 patients with musculoskeletal diseases of the lower limb. Patients were assessed with the Lower Extremity Functional Scale before and after rehabilitation. Rasch analysis revealed some problems related to rating scale category functioning, item fit, and item redundancy. After an iterative process, which resulted in the reduction of the rating scale categories from 5 to 4 and the deletion of 5 items, the psychometric properties of the Italian Lower Extremity Functional Scale improved. The retained 15 items with a 4-level response format fitted the Rasch model (internal construct validity) and demonstrated unidimensionality and good reliability indices (person-separation reliability 0.92; Cronbach's alpha 0.94). The analysis also showed differential item functioning for six of the retained items. The sensitivity to change of the Italian 15-item Lower Extremity Functional Scale was nearly equal to that of the original version (effect size: 0.93 and 0.98; standardized response mean: 1.20 and 1.28, for the 15-item and 20-item versions respectively). The original Italian Lower Extremity Functional Scale had unsatisfactory measurement properties. However, removing five items and simplifying the scoring from 5 to 4 levels resulted in a more valid measure with good reliability and sensitivity to change.
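
    For context, the polytomous (rating-scale) form of the Rasch model that such analyses typically fit can be written as below, where θ_n is the person ability, δ_i the item difficulty, and τ_j the category thresholds; this is the standard Andrich formulation, added for orientation rather than quoted from the paper.

      P(X_{ni} = k) \;=\; \frac{\exp \sum_{j=0}^{k} (\theta_n - \delta_i - \tau_j)}
                               {\sum_{m=0}^{M} \exp \sum_{j=0}^{m} (\theta_n - \delta_i - \tau_j)},
      \qquad \tau_0 \equiv 0,

    with M = 3 after the reduction from five to four response categories.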

  4. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mustain, Christopher J.

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  5. Tuning of radar algorithms with disdrometer data during two extremely wet months in the Paris area

    NASA Astrophysics Data System (ADS)

    Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    Radar algorithms convert the quantities measured by radars into rain rate, the quantity hydrometeorologists are interested in. They basically rely on power-law relations between these quantities. This paper focuses on three relations between the horizontal reflectivity (Zh), the differential reflectivity (Zdr), the differential phase shift (Kdp) and the rain rate (R): Zh-R, R-Kdp and R-Zh-Zdr. Data collected during the extremely wet months of May and June 2016 by three disdrometers operated by Ecole des Ponts ParisTech on its campus are used to assess the performance of these respective algorithms. In a first step, the temporal variability of the parameters characterizing the radar relations is investigated and quantified; it appears to be significant between events and even within an event. In a second step, a methodology is implemented that checks the ability of a given algorithm to reproduce the very good scale-invariant multifractal behaviour (on scales from 30 s to a few hours) observed in rainfall time series. It is compared with the use of standard scores computed at a single scale, as is commonly done. We show that a hybrid model (the Zh-R relation for low rain rates and R-Kdp for high ones) performs best. It also appears that the most local possible estimates of the parameters should be used in the radar relations.
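
    A minimal sketch of the hybrid-estimator idea, with illustrative textbook-style power-law coefficients (the Zh-R pair follows the classic Marshall-Palmer Z = 200 R^1.6 form); the event-dependent coefficients tuned in the paper are not reproduced here, and the rain-rate threshold is an assumption.

      import numpy as np

      # Hybrid radar rain-rate estimator: use a Zh-R power law at low rain rates
      # and an R-Kdp power law above a threshold.

      def rain_from_zh(zh_dbz, a=200.0, b=1.6):
          z_lin = 10.0 ** (zh_dbz / 10.0)          # dBZ -> mm^6 m^-3
          return (z_lin / a) ** (1.0 / b)          # invert Z = a * R^b

      def rain_from_kdp(kdp, c=29.0, d=0.85):
          return c * np.sign(kdp) * np.abs(kdp) ** d

      def hybrid_rain_rate(zh_dbz, kdp, threshold_mm_h=10.0):
          r_zh = rain_from_zh(zh_dbz)
          r_kdp = rain_from_kdp(kdp)
          return np.where(r_zh < threshold_mm_h, r_zh, r_kdp)

      zh = np.array([25.0, 40.0, 52.0])            # dBZ
      kdp = np.array([0.05, 0.6, 2.5])             # deg/km
      print(hybrid_rain_rate(zh, kdp))             # rain rates in mm/h

    The switch exploits the fact that Kdp-based estimates are more robust in heavy rain, while reflectivity-based estimates behave better at low rates, which mirrors the hybrid model favoured in the study.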

  6. Scaling of precipitation extremes with temperature in the French Mediterranean region: What explains the hook shape?

    NASA Astrophysics Data System (ADS)

    Drobinski, P.; Alonzo, B.; Bastin, S.; Silva, N. Da; Muller, C.

    2016-04-01

    Expected changes to future extreme precipitation remain a key uncertainty associated with anthropogenic climate change. Extreme precipitation has been proposed to scale with the precipitable water content of the atmosphere. Assuming constant relative humidity, this implies an increase of precipitation extremes at a rate of about 7% °C-1 globally, as indicated by the Clausius-Clapeyron relationship. Increases both faster and slower than Clausius-Clapeyron have also been reported. In this work, we examine the scaling between precipitation extremes and temperature in the present climate using simulations and measurements from surface weather stations collected in the frame of the HyMeX and MED-CORDEX programs in southern France. Of particular interest are departures from the Clausius-Clapeyron thermodynamic expectation, their spatial and temporal distribution, and their origin. Looking at the scaling of precipitation extremes with temperature, two regimes emerge which form a hook shape: one at low temperatures (cooler than around 15°C) with rates of increase close to the Clausius-Clapeyron rate, and one at high temperatures (warmer than about 15°C) with sub-Clausius-Clapeyron and most often negative rates. On average, the region of focus does not seem to exhibit super-Clausius-Clapeyron behavior except at some stations, in contrast to earlier studies. Many factors can contribute to departures from Clausius-Clapeyron scaling: time and spatial averaging, the choice of scaling temperature (surface versus condensation level), and precipitation efficiency and vertical velocity in updrafts that are not necessarily constant with temperature. Most importantly, the dynamical contribution of orography to precipitation in the fall over this area during the so-called "Cevenoles" events explains the hook shape of the scaling of precipitation extremes.

  7. The Top 10 Challenges in Extreme-Scale Visual Analytics

    PubMed Central

    Wong, Pak Chung; Shen, Han-Wei; Johnson, Christopher R.; Chen, Chaomei; Ross, Robert B.

    2013-01-01

    In this issue of CG&A, researchers share their R&D findings and results on applying visual analytics (VA) to extreme-scale data. Having surveyed these articles and other R&D in this field, we’ve identified what we consider the top challenges of extreme-scale VA. To cater to the magazine’s diverse readership, our discussion evaluates challenges in all areas of the field, including algorithms, hardware, software, engineering, and social issues. PMID:24489426

  8. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layers have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. SharP is a programming abstraction that addresses this problem. It is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  9. Gradient spectral analysis of solar radio flare superevents

    NASA Astrophysics Data System (ADS)

    Rosa, R. R.; Veronese, T. B.; Sych, R. A.; Bolzan, M. A.; Sandri, S. A.; Drummond, I. A.; Becceneri, J. C.; Sawant, H. S.

    2011-12-01

    Some complex solar active regions exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their evolution. Usually, extreme radio emission is driven by a latent nonlinear process involving magnetic reconnection among coronal loops, and such extreme events (e.g., X-class flares and coronal mass ejections) express the presence of plasma and magnetic activity usually hidden inside the solar convective layer. Recently, the scaling exponent obtained from Detrended Fluctuation Analysis (DFA) has been used to characterize the formation of solar flare superevents (SFS; integrated flux of radiation greater than 1.5 J/m2) observed in the decimetric range of 1-3 GHz (Veronese et al., 2011). Here, we present a complementary computational analysis of four X-class solar flares observed at 17 GHz with the Nobeyama Radioheliograph. Our analysis is based on the combination of DFA and Gradient Spectral Analysis (GSA), which can be used to characterize the evolution of SFSs under the condition that the emission threshold is large enough (fmax > 300 S.F.U.) and the solar flux unit variability is greater than 50% of the average taken from the minimum flux to the extreme value. Preliminary studies of the gradient spectra of Nobeyama data at 17 GHz can be found in Sawant et al. (JASTP 73(11), 2011). Future applications of GSA to the images to be observed with the Brazilian Decimetric Array (BDA) are discussed.

  10. Spatial correlation of the dynamic propensity of a glass-forming liquid

    NASA Astrophysics Data System (ADS)

    Razul, M. Shajahan G.; Matharoo, Gurpreet S.; Poole, Peter H.

    2011-06-01

    We present computer simulation results on the dynamic propensity (as defined by Widmer-Cooper et al 2004 Phys. Rev. Lett. 93 135701) in a Kob-Andersen binary Lennard-Jones liquid system consisting of 8788 particles. We compute the spatial correlation function for the dynamic propensity as a function of both the reduced temperature T, and the time scale on which the particle displacements are measured. For T <= 0.6, we find that non-zero correlations occur at the largest length scale accessible in our system. We also show that a cluster-size analysis of particles with extremal values of the dynamic propensity, as well as 3D visualizations, reveal spatially correlated regions that approach the size of our system as T decreases, consistently with the behavior of the spatial correlation function. Next, we define and examine the 'coordination propensity', the isoconfigurational average of the coordination number of the minority B particles around the majority A particles. We show that a significant correlation exists between the spatial fluctuations of the dynamic and coordination propensities. In addition, we find non-zero correlations of the coordination propensity occurring at the largest length scale accessible in our system for all T in the range 0.466 < T < 1.0. We discuss the implications of these results for understanding the length scales of dynamical heterogeneity in glass-forming liquids.

  11. Identifying all moiety conservation laws in genome-scale metabolic networks.

    PubMed

    De Martino, Andrea; De Martino, Daniele; Mulet, Roberto; Pagnani, Andrea

    2014-01-01

    The stoichiometry of a metabolic network gives rise to a set of conservation laws for the aggregate level of specific pools of metabolites, which, on one hand, pose dynamical constraints that cross-link the variations of metabolite concentrations and, on the other, provide key insight into a cell's metabolic production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all non-negative integer solutions of a linear system, a programming problem known to be NP-hard. We present an efficient strategy to compute the complete set of integer conservation laws of a genome-scale stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. Our method is deployed for the analysis of moiety conservation relationships in two large-scale reconstructions of the metabolism of the bacterium E. coli, in six tissue-specific human metabolic networks, and, finally, in the human reactome as a whole, revealing that bacterial metabolism could be evolutionarily designed to cover broader production spectra than human metabolism. Convergence to the full set of moiety conservation laws in each case is achieved in extremely reduced computing times. In addition, we uncover a scaling relation that links the size of the independent pool basis to the number of metabolites, for which we present an analytical explanation.
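
    As a small illustration of the underlying linear-algebra problem, conservation laws are left-null-space vectors of the stoichiometric matrix and can be computed exactly in rational arithmetic for a toy network; the hard part the paper addresses, restricting to non-negative integer (moiety) solutions at genome scale, is not shown here.

      import sympy as sp

      # Conservation laws are vectors y with y^T S = 0, i.e. the left null space of
      # the stoichiometric matrix S (rows = metabolites, columns = reactions).
      # Toy example: A -> B, B -> C, C -> A (a closed cycle conserving A + B + C).

      S = sp.Matrix([
          [-1,  0,  1],   # A
          [ 1, -1,  0],   # B
          [ 0,  1, -1],   # C
      ])

      left_null = S.T.nullspace()      # exact, rational basis of the left null space
      for v in left_null:
          print(list(v))               # e.g. [1, 1, 1]: the total A + B + C is conserved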

  12. Multiscale image contrast amplification (MUSICA)

    NASA Astrophysics Data System (ADS)

    Vuylsteke, Pieter; Schoeters, Emile P.

    1994-05-01

    This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremity, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
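
    A minimal sketch of the general decompose-amplify-recompose idea, using a simple difference-of-Gaussians detail stack rather than the specific basis functions of MUSICA; the number of levels, smoothing widths, and the power-law gain are illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def multiscale_enhance(image, levels=4, gamma=0.7, gain=1.5):
          # Decompose: each level stores the detail removed by successive Gaussian
          # smoothing (a stand-in for the multiscale basis-function decomposition).
          smooth = image.astype(float)
          details = []
          for _ in range(levels):
              blurred = ndimage.gaussian_filter(smooth, sigma=2.0)
              details.append(smooth - blurred)
              smooth = blurred
          # Amplify non-linearly: boost small coefficients, compress large ones,
          # then recompose (with gamma = gain = 1 this reproduces the input exactly).
          enhanced = smooth
          for d in details:
              enhanced = enhanced + gain * np.sign(d) * np.abs(d) ** gamma
          return enhanced

      # toy usage on a synthetic image
      img = np.random.default_rng(1).random((128, 128))
      out = multiscale_enhance(img)
      print(out.shape, float(out.mean()))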

  13. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  14. Characteristics of atmospheric circulation patterns associated with extreme temperatures over North America in observations and climate models

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.

    Motivated by a desire to understand the physical mechanisms involved in future anthropogenic changes in extreme temperature events, the key atmospheric circulation patterns associated with extreme daily temperatures over North America in the current climate are identified. Several novel metrics are used to systematically identify and describe these patterns for the entire continent. The orientation, physical characteristics, and spatial scale of these circulation patterns vary based on latitude, season, and proximity to important geographic features (i.e., mountains, coastlines). The anomaly patterns associated with extreme cold events tend to be similar to, but opposite in sign of, those associated with extreme warm events, especially within the westerlies, and tend to scale with temperature in the same locations. The influence of the Pacific North American (PNA) pattern, the Northern Annular Mode (NAM), and the El Niño-Southern Oscillation (ENSO) on extreme temperature days and months shows that associations between extreme temperatures and the PNA and NAM are stronger than associations with ENSO. In general, the association with extremes tends to be stronger on monthly than daily time scales. Extreme temperatures are associated with the PNA and NAM in locations typically influenced by these circulation patterns; however many extremes still occur on days when the amplitude and polarity of these patterns do not favor their occurrence. In winter, synoptic-scale, transient weather disturbances are important drivers of extreme temperature days; however these smaller-scale events are often concurrent with amplified PNA or NAM patterns. Associations are weaker in summer when other physical mechanisms affecting the surface energy balance, such as anomalous soil moisture content, are associated with extreme temperatures. Analysis of historical runs from seventeen climate models from the CMIP5 database suggests that most models simulate realistic circulation patterns associated with extreme temperature days in most places. Model-simulated patterns tend to resemble observed patterns better in the winter than the summer and at 500 hPa than at the surface. There is substantial variability among the suite of models analyzed and most models simulate circulation patterns more realistically away from influential features such as large bodies of water and complex topography.

  15. Computational challenges in magnetic-confinement fusion physics

    NASA Astrophysics Data System (ADS)

    Fasoli, A.; Brunner, S.; Cooper, W. A.; Graves, J. P.; Ricci, P.; Sauter, O.; Villard, L.

    2016-05-01

    Magnetic-fusion plasmas are complex self-organized systems with an extremely wide range of spatial and temporal scales, from the electron-orbit scales (~10^-11 s, ~10^-5 m) to the diffusion time of electrical current through the plasma (~10^2 s) and the distance along the magnetic field between two solid surfaces in the region that determines the plasma-wall interactions (~100 m). The description of the individual phenomena and of the nonlinear coupling between them involves a hierarchy of models, which, when applied to realistic configurations, require the most advanced numerical techniques and algorithms and the use of state-of-the-art high-performance computers. The common thread of such models resides in the fact that the plasma components are at the same time sources of electromagnetic fields, via the charge and current densities that they generate, and subject to the action of electromagnetic fields. This leads to a wide variety of plasma modes of oscillations that resonate with the particle or fluid motion and makes the plasma dynamics much richer than that of conventional, neutral fluids.

  16. Computational discovery of extremal microstructure families

    PubMed Central

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  17. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate the physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computing resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there has been tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report on HPC in a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application based on it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  18. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  19. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams over short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large-scale modelling of LWFA scenarios.

  20. Projected changes to precipitation extremes over the Canadian Prairies using multi-RCM ensemble

    NASA Astrophysics Data System (ADS)

    Masud, M. B.; Khaliq, M. N.; Wheater, H. S.

    2016-12-01

    Information on projected changes to precipitation extremes is needed for future planning of urban drainage infrastructure and storm water management systems and to sustain socio-economic activities and ecosystems at local, regional and other scales of interest. This study explores the projected changes to seasonal (April-October) precipitation extremes at daily, hourly and sub-hourly scales over the Canadian Prairie Provinces of Alberta, Saskatchewan, and Manitoba, based on the North American Regional Climate Change Assessment Program multi-Regional Climate Model (RCM) ensemble and regional frequency analysis. The performance of each RCM is evaluated regarding boundary and performance errors to study various sources of uncertainties and the impact of large-scale driving fields. In the absence of RCM-simulated short-duration extremes, a framework is developed to derive changes to extremes of these durations. Results from this research reveal that the relative changes in sub-hourly extremes are higher than those in the hourly and daily extremes. Overall, projected changes in precipitation extremes are larger for southeastern parts of this region than southern and northern areas, and smaller for southwestern and western parts of the study area. Keywords: climate change, precipitation extremes, regional frequency analysis, NARCCAP, Canadian Prairie provinces

  1. Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses

    NASA Astrophysics Data System (ADS)

    Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.

    2014-12-01

    Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates extreme weather will increase in global change scenarios, extremes are often related to the large-scale atmospheric circulation and occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with more than 30 years of data, reanalyses can include multiple cases of extreme events, and can thereby identify commonality among the weather patterns to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and since indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.

  2. Fast laboratory-based micro-computed tomography for pore-scale research: Illustrative experiments and perspectives on the future

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-09-01

    Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has made it possible for many researchers to image a rock's pore space in 3D almost routinely. While challenges persist in this field, we turn to the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments that can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution that can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore-scale experiments visualizing two-phase flow and solute transport in real time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation data.

  3. North American Extreme Temperature Events and Related Large Scale Meteorological Patterns: A Review of Statistical Methods, Dynamics, Modeling, and Trends

    NASA Technical Reports Server (NTRS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.; hide

    2015-01-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessments of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  4. Minimum important differences for the patient-specific functional scale, 4 region-specific outcome measures, and the numeric pain rating scale.

    PubMed

    Abbott, J Haxby; Schmitt, John

    2014-08-01

    Multicenter, prospective, longitudinal cohort study. To investigate the minimum important difference (MID) of the Patient-Specific Functional Scale (PSFS), 4 region-specific outcome measures, and the numeric pain rating scale (NPRS) across 3 levels of patient-perceived global rating of change in a clinical setting. The MID varies depending on the external anchor defining patient-perceived "importance." The MID for the PSFS has not been established across all body regions. One thousand seven hundred eight consecutive patients with musculoskeletal disorders were recruited from 5 physical therapy clinics. The PSFS, NPRS, and 4 region-specific outcome measures-the Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale-were assessed at the initial and final physical therapy visits. Global rating of change was assessed at the final visit. MID was calculated for the PSFS and NPRS (overall and for each body region), and for each region-specific outcome measure, across 3 levels of change defined by the global rating of change (small, medium, large change) using receiver operating characteristic curve methodology. The MID for the PSFS (on a scale from 0 to 10) ranged from 1.3 (small change) to 2.3 (medium change) to 2.7 (large change), and was relatively stable across body regions. MIDs for the NPRS (-1.5 to -3.5), Oswestry Disability Index (-12), Neck Disability Index (-14), Upper Extremity Functional Index (6 to 11), and Lower Extremity Functional Scale (9 to 16) are also reported. We reported the MID for small, medium, and large patient-perceived change on the PSFS, NPRS, Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale for use in clinical practice and research.
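
    The abstract states that the MIDs were derived with receiver operating characteristic (ROC) curve methodology. The sketch below shows that general approach on synthetic data (not the study's cohort): the change-score cutoff that best separates patients reporting improvement from those who do not, via Youden's J.

    ```python
    # Sketch of a ROC-based minimum important difference (MID): the change-score
    # cutoff that best separates "improved" from "not improved" patients
    # (Youden's J). Synthetic data, not the study's cohort.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    improved = rng.integers(0, 2, size=300)                 # anchor: global rating of change
    change = np.where(improved == 1,
                      rng.normal(2.5, 1.5, size=300),       # PSFS-like change scores
                      rng.normal(0.5, 1.5, size=300))

    fpr, tpr, thresholds = roc_curve(improved, change)
    mid = thresholds[np.argmax(tpr - fpr)]                  # cutoff maximizing Youden's J
    print(f"estimated MID ~ {mid:.1f} points")
    ```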

  5. Turbulent transport measurements with a laser Doppler velocimeter

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectrum is obtained in terms of the space time correlation function of the fluid. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time average spectrum contains information only for times shorter than the Lagrangian integral time scale of the turbulence. To examine the long time behavior, one must use either extremely small scattering angles, much longer wavelength radiation or a different mode of signal analysis, e.g., FM detection.

  6. Rasch validation of the Arabic version of the lower extremity functional scale.

    PubMed

    Alnahdi, Ali H

    2018-02-01

    The purpose of this study was to examine the internal construct validity of the Arabic version of the Lower Extremity Functional Scale (20-item Arabic LEFS) using Rasch analysis. Patients (n = 170) with lower extremity musculoskeletal dysfunction were recruited. Rasch analysis of the 20-item Arabic LEFS was performed. Once the initial Rasch analysis indicated that the 20-item Arabic LEFS did not fit the Rasch model, follow-up analyses were conducted to improve the fit of the scale to the Rasch measurement model. These modifications included removing misfitting individuals, changing the item scoring structure, removing misfitting items, and addressing bias caused by response dependency between items and differential item functioning (DIF). Initial analysis indicated deviation of the 20-item Arabic LEFS from the Rasch model. Disordered thresholds in eight items and response dependency between six items were detected, and the scale as a whole did not meet the requirement of unidimensionality. Refinements led to a 15-item Arabic LEFS that demonstrated excellent internal consistency (person separation index [PSI] = 0.92) and satisfied all the requirements of the Rasch model. Rasch analysis did not support the 20-item Arabic LEFS as a unidimensional measure of lower extremity function. The refined 15-item Arabic LEFS met all the requirements of the Rasch model and hence is a valid objective measure of lower extremity function. The Rasch-validated 15-item Arabic LEFS needs to be further tested in an independent sample to confirm its fit to the Rasch measurement model. Implications for rehabilitation: The validity of the 20-item Arabic Lower Extremity Functional Scale to measure lower extremity function is not supported. The 15-item Arabic version of the LEFS is a valid measure of lower extremity function and can be used to quantify lower extremity function in patients with lower extremity musculoskeletal disorders.

  7. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on a XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  8. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive observed and modeled datasets. For extreme climate event detection, several models based on deep neural networks have been recently proposed and attain performance superior to previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we have presented two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present a network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process to increase the resolution of data.
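
    As a rough illustration of the detection component only, the sketch below defines a small convolutional classifier for coarse 2-D climate fields in PyTorch. The architecture, input shape and channel choices are invented placeholders and do not correspond to the networks used in the paper.

    ```python
    # Minimal PyTorch CNN for classifying coarse climate fields (e.g., 64x128
    # reanalysis patches) as "extreme event" vs "no event". Purely illustrative;
    # not the architecture used in the paper.
    import torch
    import torch.nn as nn

    class EventCNN(nn.Module):
        def __init__(self, in_channels=3):          # e.g., precipitation, wind, pressure
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),             # global average pooling
            )
            self.classifier = nn.Linear(32, 2)       # event / no event

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = EventCNN()
    dummy = torch.randn(8, 3, 64, 128)               # batch of coarse fields
    logits = model(dummy)
    print(logits.shape)                              # torch.Size([8, 2])
    ```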

  9. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE PAGES

    Grotjahn, Richard; Black, Robert; Leung, Ruby; ...

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.

  10. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  11. Modelling realistic TiO2 nanospheres: A benchmark study of SCC-DFTB against hybrid DFT

    NASA Astrophysics Data System (ADS)

    Selli, Daniele; Fazio, Gianluca; Di Valentin, Cristiana

    2017-10-01

    TiO2 nanoparticles (NPs) are nowadays considered fundamental building blocks for many technological applications. Morphology is found to play a key role, with spherical NPs presenting higher binding properties and chemical activity. From the experimental point of view, the characterization of these nano-objects is extremely complex, leaving considerable room for computational investigations. In this work, TiO2 spherical NPs of different sizes (from 300 to 4000 atoms) have been studied with a two-scale computational approach. Global optimization to obtain stable and equilibrated nanospheres was performed with a self-consistent charge density functional tight-binding (SCC-DFTB) simulated annealing process, causing a considerable atomic rearrangement within the nanospheres. Those SCC-DFTB relaxed structures have then been optimized at the DFT(B3LYP) level of theory. We present a systematic and comparative SCC-DFTB vs DFT(B3LYP) study of the structural properties, with particular emphasis on the surface-to-bulk sites ratio, coordination distribution of surface sites, and surface energy. From the electronic point of view, we compare HOMO-LUMO and Kohn-Sham gaps, total and projected density of states. Overall, the comparisons between DFTB and hybrid density functional theory show that DFTB provides a rather accurate geometrical and electronic description of these nanospheres of realistic size (up to a diameter of 4.4 nm) at an extremely reduced computational cost. This opens up new challenges in simulations of very large systems and more extended molecular dynamics.

  12. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method.

    PubMed

    Matsushima, Kyoji; Nakahara, Sumio

    2009-12-01

    A large-scale full-parallax computer-generated hologram (CGH) with four billion (2^16 x 2^16) pixels is created to reconstruct a fine true 3D image of a scene, with occlusions. The polygon-based method numerically generates the object field of a surface object, whose shape is provided by a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using the segmented frame buffer is presented for handling and propagating large wave fields even in the case where the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.
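
    Polygon-based CGH synthesis repeatedly propagates large sampled wave fields between planes. The sketch below shows plane-to-plane propagation with the standard angular spectrum method in NumPy; it is a generic illustration of such a propagation step, not the paper's segmented frame buffer implementation, and the aperture, pitch and wavelength are arbitrary.

    ```python
    # Plane-to-plane propagation of a sampled wave field with the angular
    # spectrum method. Illustrates the kind of field propagation step used in
    # polygon-based CGH synthesis; not the paper's segmented-frame-buffer code.
    import numpy as np

    def angular_spectrum(u0, wavelength, dx, z):
        """Propagate complex field u0 (N x N, sample pitch dx) a distance z."""
        n = u0.shape[0]
        k = 2 * np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies (cycles/m)
        fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
        kz_sq = k**2 - (2 * np.pi * fxx) ** 2 - (2 * np.pi * fyy) ** 2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))          # drop evanescent components
        transfer = np.exp(1j * kz * z) * (kz_sq > 0)
        return np.fft.ifft2(np.fft.fft2(u0) * transfer)

    u0 = np.zeros((512, 512), dtype=complex)
    u0[240:272, 240:272] = 1.0                        # square aperture
    u1 = angular_spectrum(u0, wavelength=633e-9, dx=1e-6, z=5e-3)
    print(np.abs(u1).max())
    ```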

  13. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
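
    To make the efficiency argument concrete, the toy example below applies the discrete adjoint idea to a linear state equation: a single adjoint solve yields the sensitivity of an objective to hundreds of design variables, checked against a finite difference. It is a generic sketch of the principle, not the NASA Langley Navier-Stokes implementation.

    ```python
    # Toy discrete adjoint: objective J = c^T u with state equation A u = B d,
    # where the right-hand side depends linearly on many design variables d.
    # One adjoint solve (A^T lam = c) gives dJ/dd for all variables at once.
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 50, 200                                   # states, design variables
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))                  # b(d) = B d
    c = rng.standard_normal(n)
    d = rng.standard_normal(m)

    u = np.linalg.solve(A, B @ d)                    # state solve
    lam = np.linalg.solve(A.T, c)                    # single adjoint solve
    grad_adjoint = B.T @ lam                         # dJ/dd, all m components

    # Finite-difference check on one component.
    eps, i = 1e-6, 7
    d_pert = d.copy(); d_pert[i] += eps
    u_pert = np.linalg.solve(A, B @ d_pert)
    fd = (c @ u_pert - c @ u) / eps
    print(grad_adjoint[i], fd)                       # should agree closely
    ```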

  14. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning. The discretized pursuit LA is the most popular among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, the learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates are mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed to perform both phases 1 and 3 with a computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence speed than the classical one when operating in stationary environments. This paper can promote the application of LA to large-scale-action problems that require efficient reinforcement learning tools with assured ε-optimality, fast convergence speed, and low computational complexity for each iteration.
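
    For context, the sketch below implements the classical discretized pursuit update (the baseline the paper accelerates) in a stationary Bernoulli environment; the reward probabilities and resolution parameter are invented, and the paper's fast variant with per-iteration cost independent of the action count is not reproduced here.

    ```python
    # Classical discretized pursuit learning automaton in a stationary Bernoulli
    # environment. This only shows the baseline scheme the paper improves upon.
    import numpy as np

    rng = np.random.default_rng(3)
    reward_probs = np.array([0.4, 0.55, 0.7, 0.65])   # unknown to the automaton
    r = len(reward_probs)
    N = 100                                           # resolution parameter
    delta = 1.0 / (r * N)                             # smallest probability step

    p = np.full(r, 1.0 / r)                           # action probabilities
    wins = np.ones(r)                                 # reward counts (optimistic init)
    pulls = np.ones(r)                                # selection counts

    for _ in range(20000):
        a = rng.choice(r, p=p)                        # phase 1: select an action
        reward = rng.random() < reward_probs[a]       # environment response
        wins[a] += reward
        pulls[a] += 1
        best = int(np.argmax(wins / pulls))           # phase 2: best estimated action
        # phase 3: discretized pursuit update toward the estimated best action
        others = np.arange(r) != best
        p[others] = np.maximum(p[others] - delta, 0.0)
        p[best] = 1.0 - p[others].sum()

    print("learned probabilities:", np.round(p, 3))
    ```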

  15. Final Report. Institute for Ultrascale Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois

    The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared to face the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.

  16. Characterization of a laboratory model of computer mouse use - applications for studying risk factors for musculoskeletal disorders.

    PubMed

    Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G

    2007-03-01

    In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR-scale: 3.5, p<0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6 degrees, error after: 5.6 degrees, p<0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.

  17. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures and frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support the rapid development of high-performance processing pipelines for astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems in developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing high-performance data processing systems for astronomical telescopes and for significantly reducing software development expenses.

  18. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    NASA Astrophysics Data System (ADS)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme precipitation days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
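
    A minimal sketch of the KMC step on synthetic stand-in fields: flatten each extreme-day tropopause-height anomaly map and cluster the maps with scikit-learn's KMeans. Array sizes and the number of clusters are illustrative only.

    ```python
    # Sketch of the KMC step: cluster daily tropopause-height maps on extreme-
    # precipitation days into a small number of circulation patterns. Arrays are
    # synthetic stand-ins for the reanalysis fields used in the paper.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n_days, nlat, nlon = 300, 40, 60
    tropopause_height = rng.normal(11000.0, 800.0, size=(n_days, nlat, nlon))

    # Remove the time mean so clusters reflect anomalies, then flatten each map.
    anomalies = tropopause_height - tropopause_height.mean(axis=0)
    X = anomalies.reshape(n_days, -1)

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
    pattern_of_day = km.labels_                       # pattern index for each extreme day
    centroids = km.cluster_centers_.reshape(6, nlat, nlon)
    print(np.bincount(pattern_of_day))                # days assigned to each pattern
    ```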

  19. Solar Weather Ice Monitoring Station (SWIMS). A low cost, extreme/harsh environment, solar powered, autonomous sensor data gathering and transmission system

    NASA Astrophysics Data System (ADS)

    Chetty, S.; Field, L. A.

    2013-12-01

    The Arctic ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, will increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small scale deployments using various materials have been done in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure the albedo, weather, water temperature and other environmental parameters. The system employs low cost, high accuracy/precision sensors, high resolution cameras, and an extreme environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered with redundant battery backup on a floating buoy platform engineered for low temperature (-40C) and high wind conditions. The system also incorporates tilt sensors, sonar based ice thickness sensors and a weather station. To keep the costs low, each SWIMS unit measures incoming and reflected radiation from the four quadrants around the buoy. This allows data from four sets of sensors, the cameras, the weather station, and the water temperature probe to be collected and transmitted by a single on-board solar powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments. (Figure captions: image of the setting sun captured by camera #3 of the SWIMS station; one of the images captured by SWIMS camera #4.)

  20. The relationship among computer work, environmental design, and musculoskeletal and visual discomfort: examining the moderating role of supervisory relations and co-worker support.

    PubMed

    Robertson, Michelle M; Huang, Yueng-Hsiang; Larson, Nancy

    2016-01-01

    The prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the effects of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort, and their impact on the safety and health of computer work employees. A large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model. Significant relationships with visual and musculoskeletal discomfort were found for computer use and for the psychosocial factors of co-worker support and supervisory relations. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort. This study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.

  1. Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bautista-Gomez, Leonardo; Cappello, Franck

    Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% coverage against data corruption.
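
    The detectors themselves are not specified in the abstract; the sketch below shows one simplified, behavior-based check in the same spirit, flagging grid points whose step-to-step change exceeds a bound learned from the recent history of a (synthetic) field. It should not be read as the authors' detectors.

    ```python
    # Simplified, illustrative silent-data-corruption check: flag grid points
    # whose step-to-step change exceeds a bound learned from recent history.
    # Loosely inspired by behavior-based detection; not the paper's detectors.
    import numpy as np

    def detect_corruption(prev, curr, history_deltas, k=8.0):
        """Return a boolean mask of suspicious points in `curr`.

        history_deltas: |differences| from earlier (trusted) time steps, used to
        estimate how much the field normally changes per step.
        """
        bound = history_deltas.mean() + k * history_deltas.std()
        return np.abs(curr - prev) > bound

    rng = np.random.default_rng(5)
    field_prev = rng.normal(300.0, 1.0, size=(128, 128))
    field_curr = field_prev + rng.normal(0.0, 0.01, size=(128, 128))
    field_curr[17, 42] += 5.0                         # inject a bit-flip-like error

    hist = np.abs(rng.normal(0.0, 0.01, size=100_000))
    mask = detect_corruption(field_prev, field_curr, hist)
    print("flagged points:", np.argwhere(mask))       # should report (17, 42)
    ```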

  2. Influence of spatial and temporal scales in identifying temperature extremes

    NASA Astrophysics Data System (ADS)

    van Eck, Christel M.; Friedlingstein, Pierre; Mulder, Vera L.; Regnier, Pierre A. G.

    2016-04-01

    Extreme heat events are becoming more frequent. Notable are severe heatwaves such as the European heatwave of 2003, the Russian heatwave of 2010 and the Australian heatwave of 2013. Surface temperature is attaining new maxima not only during the summer but also during the winter. The year 2015 is reported to be a record-breaking year for both summer and winter temperatures. These extreme temperatures are taking their human and environmental toll, emphasizing the need for an accurate method to define a heat extreme in order to fully understand the spatial and temporal spread of an extreme and its impact. This research aims to explore how the use of different spatial and temporal scales influences the identification of a heat extreme. For this purpose, two near-surface temperature datasets of different temporal and spatial scale are used. First, the daily ERA-Interim dataset of 0.25 degree and a time span of 32 years (1979-2010). Second, the daily Princeton Meteorological Forcing Dataset of 0.5 degree and a time span of 63 years (1948-2010). A temperature is considered anomalously extreme when it surpasses the 90th, 95th, or 99th percentile threshold based on the aforementioned pre-processed datasets. The analysis is conducted on a global scale, dividing the world into IPCC's so-called SREX regions developed for the analysis of extreme climate events. Pre-processing is done by detrending and/or subtracting the monthly climatology based on 32 years of data for both datasets and on 63 years of data for only the Princeton Meteorological Forcing Dataset. This results in 6 datasets of temperature anomalies from which the location in time and space of the anomalously warm days is identified. Comparison of the differences between these 6 datasets in terms of absolute threshold temperatures for extremes and the temporal and spatial spread of the anomalously warm extreme days shows a dependence of the results on the datasets and methodology used. This stresses the need for a careful selection of data and methodology when identifying heat extremes.
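
    A compact sketch of the thresholding step on a synthetic daily temperature series: remove the day-of-year climatology and flag days whose anomaly exceeds the 90th, 95th or 99th percentile. The series and thresholds are illustrative, not the ERA-Interim or Princeton data.

    ```python
    # Sketch of the thresholding step: remove the seasonal cycle from a daily
    # temperature series and flag days whose anomaly exceeds the 90th, 95th or
    # 99th percentile. Synthetic series, not ERA-Interim or Princeton data.
    import numpy as np

    rng = np.random.default_rng(6)
    n_years = 32
    doy = np.tile(np.arange(365), n_years)
    temps = 15 + 10 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 3, size=doy.size)

    # Subtract the day-of-year climatology to obtain anomalies.
    clim = np.array([temps[doy == d].mean() for d in range(365)])
    anom = temps - clim[doy]

    for q in (90, 95, 99):
        thresh = np.percentile(anom, q)
        extreme_days = np.flatnonzero(anom > thresh)
        print(f"{q}th percentile: {thresh:.2f} deg C, {extreme_days.size} extreme days")
    ```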

  3. Computation of high-resolution SAR distributions in a head due to a radiating dipole antenna representing a hand-held mobile phone.

    PubMed

    Van de Kamer, J B; Lagendijk, J J W

    2002-05-21

    SAR distributions in a healthy female adult head as a result of a radiating vertical dipole antenna (frequency 915 MHz) representing a hand-held mobile phone have been computed for three different resolutions: 2 mm, 1 mm and 0.4 mm. The extremely high resolution of 0.4 mm was obtained with our quasistatic zooming technique, which is briefly described in this paper. For an effectively transmitted power of 0.25 W, the maximum averaged SAR values in both cubic- and arbitrary-shaped volumes are, respectively, about 1.72 and 2.55 W kg(-1) for 1 g and 0.98 and 1.73 W kg(-1) for 10 g of tissue. These numbers do not vary much (<8%) for the different resolutions, indicating that SAR computations at a resolution of 2 mm are sufficiently accurate to describe the large-scale distribution. However, considering the detailed SAR pattern in the head, large differences may occur if high-resolution computations are performed rather than low-resolution ones. These deviations are caused by both increased modelling accuracy and improved anatomical description in higher resolution simulations. For example, the SAR profile across a boundary between tissues with high dielectric contrast is much more accurately described at higher resolutions. Furthermore, low-resolution dielectric geometries may suffer from loss of anatomical detail, which greatly affects small-scale SAR distributions. Thus, for strongly inhomogeneous regions, high-resolution SAR modelling is an absolute necessity.

  4. Sensitivity to change of mobility measures in musculoskeletal conditions on lower extremities in outpatient rehabilitation settings.

    PubMed

    Navarro-Pujalte, Esther; Gacto-Sánchez, Mariano; Montilla-Herrador, Joaquina; Escolar-Reina, Pilar; Ángeles Franco-Sierra, María; Medina-Mirapeix, Francesc

    2018-01-12

    Prospective longitudinal study. To examine the sensitivity of the Mobility Activities Measure for lower extremities and to compare it to the sensitivity of the Physical Functioning Scale (PF-10) and the Patient-Specific Functional Scale (PSFS) at week 4 and week 8 post-hospitalization in outpatient rehabilitation settings. The Mobility Activities Measure is a set of short mobility measures to track outpatient rehabilitation progress: its scales have shown good properties, but their sensitivity to change has not been reported. Patients with musculoskeletal conditions were recruited at admission in three outpatient rehabilitation settings in Spain. Data were collected at admission, week 4 and week 8 from an initial sample of 236 patients (mean age ± SD = 36.7 ± 11.1). Mobility Activities Measure scales for the lower extremity; PF-10; and PSFS. All the Mobility Activities Measure scales were sensitive to both positive and negative changes (the Standardized Response Means (SRMs) ranged between 1.05 and 1.53 at week 4, and between 0.63 and 1.47 at week 8). The summary measure encompassing the three Mobility Activities Measure scales detected a higher proportion of participants who had improved beyond the minimal detectable change (MDC) than detected by the PSFS and the PF-10 both at week 4 (86.64% vs. 69.81% and 42.23%, respectively) and week 8 (71.14% vs. 55.65% and 60.81%, respectively). The three Mobility Activities Measure scales assessing the lower extremity can be used across outpatient rehabilitation settings to provide consistent and sensitive measures of changes in patients' mobility. Implications for rehabilitation: All the scales of the Mobility Activities Measure for the lower extremity were sensitive to both positive and negative change across the follow-up periods. Overall, the summary measure encompassing the three Mobility Activities Measure scales for the lower extremity appeared more sensitive to positive changes than the Physical Functioning Scale, especially during the first four weeks of treatment. The summary measure also detected a higher percentage of participants with positive change that exceeded the minimal detectable change than the Patient-Specific Functional Scale and the Physical Functioning Scale at the first follow-up period. By demonstrating their consistency and sensitivity to change, the three Mobility Activities Measure scales can now be considered for tracking patients' functional progress. The Mobility Activities Measure can therefore be used in patients with musculoskeletal conditions across outpatient rehabilitation settings to provide estimates of change in mobility activities focusing on the lower extremity.

  5. Temporal Clustering of Regional-Scale Extreme Precipitation Events in Southern Switzerland

    NASA Astrophysics Data System (ADS)

    Barton, Yannick; Giannakaki, Paraskevi; Von Waldow, Harald; Chevalier, Clément; Pfhal, Stephan; Martius, Olivia

    2017-04-01

    Temporal clustering of extreme precipitation events on subseasonal time scales is a form of compound extremes and is of crucial importance for the formation of large-scale flood events. Here, the temporal clustering of regional-scale extreme precipitation events in southern Switzerland is studied. These precipitation events are relevant for the flooding of lakes in southern Switzerland and northern Italy. This research determines whether temporal clustering is present and then identifies the dynamics that are responsible for the clustering. An observation-based gridded precipitation dataset of Swiss daily rainfall sums and ECMWF reanalysis datasets are used. To analyze the clustering in the precipitation time series, a modified version of Ripley's K function is used. It determines the average number of extreme events in a time period, to characterize temporal clustering on subseasonal time scales and to determine the statistical significance of the clustering. Significant clustering of regional-scale precipitation extremes is found on subseasonal time scales during the fall season. Four high-impact clustering episodes are then selected and the dynamics responsible for the clustering are examined. During the four clustering episodes, all heavy precipitation events were associated with an upper-level breaking Rossby wave over western Europe, and in most cases strong diabatic processes upstream over the Atlantic played a role in the amplification of these breaking waves. Atmospheric blocking downstream over eastern Europe supported this wave breaking during two of the clustering episodes. During one of the clustering periods, several extratropical transitions of tropical cyclones in the Atlantic contributed to the formation of high-amplitude ridges over the Atlantic basin and downstream wave breaking. During another event, blocking over Alaska assisted the phase locking of the Rossby waves downstream over the Atlantic.
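
    A simplified stand-in for the clustering test: a one-dimensional Ripley's-K-style statistic (average number of other events within a window of each event) compared with a Monte Carlo null of randomly scattered events. The event days, window and season length are invented, and the study's modified estimator is not reproduced.

    ```python
    # One-dimensional Ripley's-K-style check for temporal clustering: average
    # number of other events within +/- w days of each event, compared with the
    # value expected for randomly scattered events (Monte Carlo). A simplified
    # stand-in for the modified Ripley's K estimator used in the study.
    import numpy as np

    def k_stat(event_days, window):
        event_days = np.sort(np.asarray(event_days))
        counts = [np.sum(np.abs(event_days - t) <= window) - 1 for t in event_days]
        return np.mean(counts)

    rng = np.random.default_rng(7)
    season_length = 90                                 # days in the fall season
    events = np.array([3, 5, 7, 40, 71, 73, 74, 78])   # clustered example

    obs = k_stat(events, window=5)
    null = [k_stat(rng.choice(season_length, size=len(events), replace=False), 5)
            for _ in range(2000)]
    p_value = np.mean(np.asarray(null) >= obs)
    print(f"observed K = {obs:.2f}, Monte Carlo p-value = {p_value:.3f}")
    ```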

  6. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small-scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) an unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
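
    The core of the scaling idea can be written in a few lines: scale a present-day extreme by a fixed percentage per degree of dew-point change. The 7%/K and 14%/K rates below are illustrative placeholders (Clausius-Clapeyron and roughly twice it); the scenario constants in the paper were set by expert judgement.

    ```python
    # Core of the dew-point scaling idea: future precipitation extremes are the
    # present-day extremes scaled by a fixed percentage per degree of dew-point
    # change. The 7 %/K and 14 %/K rates are illustrative placeholders, not the
    # scenario constants chosen by expert judgement in the paper.
    def scaled_extreme(p_ref_mm_per_h, delta_dewpoint_K, rate_per_K=0.07):
        return p_ref_mm_per_h * (1.0 + rate_per_K) ** delta_dewpoint_K

    p_ref = 30.0                                      # present-day hourly extreme (mm/h)
    for rate in (0.07, 0.14):
        print(rate, round(scaled_extreme(p_ref, delta_dewpoint_K=2.0, rate_per_K=rate), 1))
    ```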

  7. Opening Comments: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2009-07-01

    Welcome to San Diego and the 2009 SciDAC conference. Over the next four days, I would like to present an assessment of the SciDAC program. We will look at where we've been, how we got to where we are and where we are going in the future. Our vision is to be first in computational science, to be best in class in modeling and simulation. When Ray Orbach asked me what I would do, in my job interview for the SciDAC Director position, I said we would achieve that vision. And with our collective dedicated efforts, we have managed to achieve this vision. In the last year, we have now the most powerful supercomputer for open science, Jaguar, the Cray XT system at the Oak Ridge Leadership Computing Facility (OLCF). We also have NERSC, probably the best-in-the-world program for productivity in science that the Office of Science so depends on. And the Argonne Leadership Computing Facility offers architectural diversity with its IBM Blue Gene/P system as a counterbalance to Oak Ridge. There is also ESnet, which is often understated—the 40 gigabit per second dual backbone ring that connects all the labs and many DOE sites. In the President's Recovery Act funding, there is exciting news that ESnet is going to build out to a 100 gigabit per second network using new optical technologies. This is very exciting news for simulations and large-scale scientific facilities. But as one noted SciDAC luminary said, it's not all about the computers—it's also about the science—and we are also achieving our vision in this area. Together with having the fastest supercomputer for science, at the SC08 conference, SciDAC researchers won two ACM Gordon Bell Prizes for the outstanding performance of their applications. The DCA++ code, which solves some very interesting problems in materials, achieved a sustained performance of 1.3 petaflops, an astounding result and a mark I suspect will last for some time. The LS3DF application for studying nanomaterials also required the development of a new and novel algorithm to produce results up to 400 times faster than a similar application, and was recognized with a prize for algorithm innovation—a remarkable achievement. Day one of our conference will include examples of petascale science enabled at the OLCF. Although Jaguar has not been officially commissioned, it has gone through its acceptance tests, and during its shakedown phase there have been pioneer applications used for the acceptance tests, and they are running at scale. These include applications in the areas of astrophysics, biology, chemistry, combustion, fusion, geosciences, materials science, nuclear energy and nuclear physics. We also have a whole compendium of science we do at our facilities; these have been documented and reviewed at our last SciDAC conference. Many of these were highlighted in our Breakthroughs Report. One session at this week's conference will feature a cross-section of these breakthroughs. In the area of scalable electromagnetic simulations, the Auxiliary-space Maxwell Solver (AMS) uses specialized finite element discretizations and multigrid-based techniques, which decompose the original problem into easier-to-solve subproblems. Congratulations to the mathematicians on this. Another application on the list of breakthroughs was the authentication of PETSc, which provides scalable solvers used in many DOE applications and has solved problems with over 3 billion unknowns and scaled to over 16,000 processors on DOE leadership-class computers. 
This is becoming a very versatile and useful toolkit to achieve performance at scale. With the announcement of SIAM's first class of Fellows, we are remarkably well represented. Of the group of 191, more than 40 of these Fellows are in the 'DOE space.' We are so delighted that SIAM has recognized them for their many achievements. In the coming months, we will illustrate our leadership in applied math and computer science by looking at our contributions in the areas of programming models, development and performance tools, math libraries, system software, collaboration, and visualization and data analytics. This is a large and diverse list of libraries. We have asked for two panels, one chaired by David Keyes and composed of many of the nation's leading mathematicians, to produce a report on the most significant accomplishments in applied mathematics over the last eight years, taking us back to the start of the SciDAC program. In addition, we have a similar panel in computer science to be chaired by Kathy Yelick. They are going to identify the computer science accomplishments of the past eight years. These accomplishments are difficult to get a handle on, and I'm looking forward to this report. We will also have a follow-on to our report on breakthroughs in computational science and this will also go back eight years, looking at the many accomplishments under the SciDAC and INCITE programs. This will be chaired by Tony Mezzacappa. So, where are we going in the SciDAC program? It might help to take a look at computational science and how it got started. I go back to Ken Wilson, who made the model and has written on computational science and computational science education. His model was thus: The computational scientist plays the role of the experimentalist, and the math and CS researchers play the role of theorists, and the computers themselves are the experimental apparatus. And that in simulation science, we are carrying out numerical experiments as to the nature of physical and biological sciences. Peter Lax, in the same time frame, developed a report on large-scale computing in science and engineering. Peter remarked, 'Perhaps the most important applications of scientific computing come not in the solution of old problems, but in the discovery of new phenomena through numerical experimentation.' And in the early years, I think the person who provided the most guidance, the most innovation and the most vision for where the future might lie was Ed Oliver. Ed Oliver died last year. Ed did a number of things in science. He had this personality where he knew exactly what to do, but he preferred to stay out of the limelight so that others could enjoy the fruits of his vision. We in the SciDAC program and ASCR Facilities are still enjoying the benefits of his vision. We will miss him. Twenty years after Ken Wilson, Ray Orbach laid out the fundamental premise for SciDAC in an interview that appeared in SciDAC Review: 'SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing. 
' As you look at the Lax report from 1982, it talks about how 'Future significant improvements may have to come from architectures embodying parallel processing elements—perhaps several thousands of processors.' And it continues, 'Research in languages, algorithms and numerical analysis will be crucial in learning to exploit these new architectures fully.' In the early '90s, Sterling, Messina and Smith developed a workshop report on petascale computing and concluded, 'A petaflops computer system will be feasible in two decades, or less, and rely in part on the continual advancement of the semiconductor industry both in speed enhancement and cost reduction through improved fabrication processes.' So they were not wrong, and today we are embarking on a forward look that is at a different scale, the exascale, going to 10^18 flops. In 2007, Stevens, Simon and Zacharia chaired a series of town hall meetings looking at exascale computing, and in their report wrote, 'Exascale computer systems are expected to be technologically feasible within the next 15 years, or perhaps sooner. These systems will push the envelope in a number of important technologies: processor architecture, scale of multicore integration, power management and packaging.' The concept of computing on the Jaguar computer involves hundreds of thousands of cores, as do the IBM systems that are currently out there. So the scale of computing with systems with billions of processors is staggering to me, and I don't know how the software and math folks feel about it. We have now embarked on a road toward extreme scale computing. We have created a series of town hall meetings and we are now in the process of holding workshops that address what I call within the DOE speak 'the mission need,' or what is the scientific justification for computing at that scale. We are going to have a total of 13 workshops. The workshops on climate, high energy physics, nuclear physics, fusion, and nuclear energy have been held. The report from the workshop on climate is actually out and available, and the other reports are being completed. The upcoming workshops are on biology, materials, and chemistry; and workshops that engage science for nuclear security are a partnership between NNSA and ASCR. There are additional workshops on applied math, computer science, and architecture that are needed for computing at the exascale. These extreme scale workshops will provide the foundation in our office, the Office of Science, the NNSA and DOE, and we will engage the National Science Foundation and the Department of Defense as partners. We envision a 10-year program for an exascale initiative. It will be an integrated R&D program initially—you can think about five years for research and development—that would be in hardware, operating systems, file systems, networking and so on, as well as software for applications. Application software and the operating system and the hardware all need to be bundled in this period so that at the end the system will execute the science applications at scale. We also believe that this process will have to have considerable investment from the manufacturers and vendors to be successful. We have formed laboratory, university and industry working groups to start this process and formed a panel to look at where SciDAC needs to go to compute at the extreme scale, and we have formed an executive committee within the Office of Science and the NNSA to focus on these activities. We will have outreach to DoD in the next few months. 
We are anticipating a solicitation within the next two years in which we will compete this bundled R&D process. We don't know how we will incorporate SciDAC into extreme scale computing, but we do know there will be many challenges. And as we have shown over the years, we have the expertise and determination to surmount these challenges.

  8. On the extreme value statistics of normal random matrices and 2D Coulomb gases: Universality and finite N corrections

    NASA Astrophysics Data System (ADS)

    Ebrahimi, R.; Zohren, S.

    2018-03-01

    In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which are important for practical applications, given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
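
    As a quick numerical illustration of the ensemble in question, the sketch below samples complex Ginibre matrices and records the eigenvalue of largest modulus, whose suitably rescaled fluctuations follow a Gumbel law; the precise rescaling and the finite-N corrections derived in the paper are not applied.

    ```python
    # Numerical illustration: eigenvalue of largest modulus of complex Ginibre
    # matrices (a normal-random-matrix / 2D Coulomb gas ensemble). After proper
    # rescaling its fluctuations follow a Gumbel law; that rescaling and the
    # finite-N corrections derived in the paper are not applied here.
    import numpy as np

    rng = np.random.default_rng(8)
    N, samples = 200, 500
    max_moduli = np.empty(samples)
    for s in range(samples):
        g = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        max_moduli[s] = np.abs(np.linalg.eigvals(g)).max()

    # With this normalization the spectrum fills the unit disk (circular law),
    # so the largest modulus concentrates slightly above 1 for finite N.
    print(f"mean = {max_moduli.mean():.4f}, std = {max_moduli.std():.4f}")
    ```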

  9. Spectral photometry of extreme helium stars: Ultraviolet fluxes and effective temperature

    NASA Technical Reports Server (NTRS)

    Drilling, J. S.; Schoenberner, D.; Heber, U.; Lynas-Gray, A. E.

    1982-01-01

    Ultraviolet flux distributions are presented for the extremely helium rich stars BD +10 deg 2179, HD 124448, LSS 3378, BD -9 deg 4395, LSE 78, HD 160641, LSIV -1 deg 2, BD 1 deg 3438, HD 168476, MV Sgr, LS IV-14 deg 109 (CD -35 deg 11760), LSII +33 deg 5 and BD +1 deg 4381 (LSIV +2 deg 13) obtained with the International Ultraviolet Explorer (IUE). Broad band photometry and a newly computed grid of line blanketed model atmospheres were used to determine accurate angular diameters and total stellar fluxes. The resultant effective temperatures are in most cases in satisfactory agreement with those based on broad band photometry and/or high resolution spectroscopy in the visible. For two objects, LSII +33 deg 5 and LSE 78, disagreement was found between the IUE observations and broadband photometry: the colors predict temperatures around 20,000 K, whereas the UV spectra indicate much lower photospheric temperatures of 14,000 to 15,000 K. The new temperature scale for extreme helium stars extends to lower effective temperatures than that of Heber and Schoenberner (1981) and covers the range from 8,500 K to 32,000 K.

  10. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate markedly from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.
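
    The longitudinal macrodispersivity referred to above can be estimated from the spatial moments of an advected particle plume. The sketch below uses a toy random-walk transport model (the velocity statistics and all sizes are illustrative, not the authors' flow and transport solver) together with the standard moment relation alpha_L = 0.5 * d(sigma_x^2)/d(mean_x).

      import numpy as np

      rng = np.random.default_rng(1)
      n_particles, n_steps, dt = 5000, 200, 1.0
      u_mean, u_sigma = 1.0, 0.3               # illustrative mean velocity and variability

      x = np.zeros(n_particles)
      mean_x, var_x = [], []
      for _ in range(n_steps):
          # Each particle samples a local velocity; heterogeneity spreads the plume.
          x += rng.normal(u_mean, u_sigma, n_particles) * dt
          mean_x.append(x.mean())
          var_x.append(x.var())

      mean_x, var_x = np.array(mean_x), np.array(var_x)
      # Apparent longitudinal macrodispersivity: alpha_L = 0.5 * d(var_x) / d(mean_x)
      alpha_L = 0.5 * np.gradient(var_x, mean_x)
      print("late-time alpha_L ~", alpha_L[-10:].mean())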

  11. Unilateral and bilateral upper extremity weight-bearing effect on upper extremity impairment and functional performance after brain injury

    PubMed Central

    REISTETTER, TIMOTHY; ABREU, BEATRIZ C.; BEAR-LEHMAN, JANE; OTTENBACHER, KENNETH J.

    2010-01-01

    The purpose of the study was to investigate the effect of upper extremity (UE) weight bearing on UE impairment and functional performance of persons with acquired brain injury (BI). A quasi-experimental design was used to examine a convenience sample of 99 persons with acquired BI and 22 without BI (WBI) living in a community re-entry centre. A computerized force-sensing array pressure map system was used to determine the UE pressure during unilateral and bilateral conditions. Differences between groups were examined using t-tests. Correlations were computed between UE weight bearing and hand function, and functional performance as measured by the Fugl-Meyer scale and functional independence measure (FIM) scale. The group of people with BI exerted significantly lower UE weight bearing during unilateral conditions as compared with persons WBI (left: t(119) = 2.34, p = 0.021; right: t(119) = 4.79, p = 0.043). UE weight-bearing measures correlated strongly with FIM motor scores, with bilateral UE conditions yielding the highest significant correlations (bilateral left r = 0.487, p < 0.001; bilateral right r = 0.469, p < 0.01). The results indicated that UE weight-bearing pressure differs in unilateral and bilateral conditions, between persons with BI and WBI, and between persons with stroke and traumatic brain injury. These findings may have implications for occupational therapists who use unilateral versus bilateral motor training for rehabilitation. There is a need to replicate the study design with a randomized and stratified sample of persons with BI. PMID:19551694
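
    The group comparisons and correlations described above map onto standard statistical routines. The sketch below uses synthetic placeholder data (only the group sizes match the study; every value is invented) to show the form of the computation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      wb_bi = rng.normal(8.0, 2.0, 99)     # weight-bearing pressures, BI group (synthetic)
      wb_wbi = rng.normal(10.0, 2.0, 22)   # weight-bearing pressures, group without BI (synthetic)

      # Independent-samples t-test for the between-group difference
      t_stat, p_val = stats.ttest_ind(wb_bi, wb_wbi, equal_var=False)
      print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

      # Correlation between weight bearing and a functional score (synthetic FIM motor)
      fim_motor = 0.5 * wb_bi + rng.normal(0.0, 1.5, 99)
      r, p_r = stats.pearsonr(wb_bi, fim_motor)
      print(f"r = {r:.2f}, p = {p_r:.3f}")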

  12. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, Benny Manuel; Ballance, Robert; Haskell, Karen

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, is included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  13. Mapping fire probability and severity in a Mediterranean area using different weather and fuel moisture scenarios

    NASA Astrophysics Data System (ADS)

    Arca, B.; Salis, M.; Bacciu, V.; Duce, P.; Pellizzaro, G.; Ventura, A.; Spano, D.

    2009-04-01

    Although in many countries lightning is the main cause of ignition, in the Mediterranean Basin forest fires are predominantly ignited by arson or by human negligence. The fire season peaks coincide with extreme weather conditions (mainly strong winds, hot temperatures, low atmospheric water vapour content) and high tourist presence. Many studies have reported that in the Mediterranean Basin the projected impacts of climate change will cause greater weather variability and extreme weather conditions, with drier and hotter summers and heat waves. At the long-term scale, climate change could affect the fuel load and the dead/live fuel ratio, and therefore could change the vegetation flammability. At the short-term scale, the increase of extreme weather events could directly affect fuel water status, and it could increase large fire occurrence. In this context, detecting the areas characterized by both high probability of large fire occurrence and high fire severity could represent an important component of fire management planning. In this work we compared several fire probability and severity maps (fire occurrence, rate of spread, fireline intensity, flame length) obtained for a study area located in North Sardinia, Italy, using the FlamMap simulator (USDA Forest Service, Missoula). FlamMap computes the potential fire behaviour characteristics over a defined landscape for given weather, wind and fuel moisture data. Different weather and fuel moisture scenarios were tested to predict the potential impact of climate change on fire parameters. The study area, characterized by a mosaic of urban areas, protected areas, and other areas subject to anthropogenic disturbances, is mainly composed of fire-prone Mediterranean maquis. The input themes needed to run FlamMap were prepared as 10 m grids; the wind data, obtained using a computational fluid-dynamics model, were provided as a gridded file with a resolution of 50 m. The analysis revealed high fire probability and severity in most of the areas, and therefore a high potential danger. The FlamMap outputs and the derived fire probability maps can be used in decision support systems for fire spread and behaviour and for fire danger assessment under actual and future fire regimes.

  14. International Halley Watch: Discipline specialists for large scale phenomena

    NASA Technical Reports Server (NTRS)

    Brandt, J. C.; Niedner, M. B., Jr.

    1986-01-01

    The largest scale structures of comets, their tails, are extremely interesting from a physical point of view, and some of their properties are among the most spectacular displayed by comets. Because the tail(s) is an important component of a comet, the Large-Scale Phenomena (L-SP) Discipline was created as one of eight different observational methods in which Halley data would be encouraged and collected from all around the world under the auspices of the International Halley Watch (IHW). The L-SP Discipline Specialist (DS) Team resides at NASA/Goddard Space Flight Center under the leadership of John C. Brandt, Malcolm B. Niedner, and their team of image-processing and computer specialists; Jurgen Rahe at NASA Headquarters completes the formal DS science staff. The team has adopted the study of disconnection events (DEs) as its principal science target, and it is because of the rapid changes that occur in connection with DEs that such extensive global coverage was deemed necessary to assemble a complete record.

  15. Stimulus-dependent spiking relationships with the EEG

    PubMed Central

    Snyder, Adam C.

    2015-01-01

    The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954

  16. Drought in the Horn of Africa: attribution of a damaging and repeating extreme event

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Otto, Friederike; Mitchell, Daniel; Dadson, Simon; Jones, Richard

    2015-04-01

    We have applied detection and attribution techniques to the severe drought that hit the Horn of Africa in 2014. The short rains failed in late 2013 in Kenya, South Sudan, Somalia and southern Ethiopia, leading to a very dry growing season from January to March 2014, and subsequently to the current drought in many agricultural areas of the sub-region. We have made use of the weather@home project, which uses publicly volunteered distributed computing to provide a large ensemble of simulations sufficient to sample regional climate uncertainty. Based on this, we have estimated the occurrence rates of the kinds of rare and extreme events implicated in this large-scale drought. From land surface model runs based on these ensemble simulations, we have estimated the impacts of climate anomalies during this period, and therefore we can reliably identify some factors of the ongoing drought as attributable to human-induced climate change. The UNFCCC's Adaptation Fund is attempting to support projects that bring about an adaptation to "the adverse effects of climate change", but in order to formulate such projects we need a much clearer way to assess how much is attributable to human-induced climate change and how much is a consequence of climate anomalies and large-scale teleconnections, which can only be provided by robust attribution techniques.

  17. Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich

    2017-10-01

    New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.
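
    The Extreme Forecast Index mentioned above compares the forecast ensemble with the model climate. A commonly quoted form is EFI = (2/pi) * integral over p of (p - F_f(p)) / sqrt(p(1-p)) dp, where F_f(p) is the fraction of ensemble members below the model-climate quantile of probability p. The sketch below uses that form with fully synthetic data; the operational ECMWF definition should be checked against their documentation, and the function name and all numbers are illustrative.

      import numpy as np

      def extreme_forecast_index(ensemble, climate_sample, n_quantiles=99):
          p = np.linspace(0.01, 0.99, n_quantiles)                  # probability levels
          clim_q = np.quantile(climate_sample, p)                   # model-climate quantiles
          F_f = np.array([(ensemble < q).mean() for q in clim_q])   # ensemble CDF at those quantiles
          integrand = (p - F_f) / np.sqrt(p * (1.0 - p))
          return (2.0 / np.pi) * np.trapz(integrand, p)

      rng = np.random.default_rng(3)
      climate = rng.gamma(2.0, 5.0, 20000)     # synthetic wind-gust model climatology
      forecast = rng.gamma(2.0, 7.0, 51)       # synthetic 51-member ensemble forecast
      print("EFI =", round(extreme_forecast_index(forecast, climate), 2))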

  18. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important developed application of the ideas underlying this framework is graph database technology supporting social network pattern matching used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected before incurring the expense of manufacture.

  19. Psychometrics of the wrist stability and hand mobility subscales of the Fugl-Meyer assessment in moderately impaired stroke.

    PubMed

    Page, Stephen J; Hade, Erinn; Persch, Andrew

    2015-01-01

    There remains a need for a quickly administered, stroke-specific, bedside measure of active wrist and finger movement for the expanding stroke population. The wrist stability and hand mobility scales of the upper extremity Fugl-Meyer Assessment (w/h UE FM) constitute a valid, reliable measure of paretic UE impairment in patients with active wrist and finger movement. The aim of this study was to determine performance on the w/h UE FM in a stable cohort of survivors of stroke with only palpable movement in their paretic wrist flexors. A single-center cohort study was conducted. Thirty-two individuals exhibiting stable, moderate upper extremity hemiparesis (15 male, 17 female; mean age=56.6 years, SD=10.1; mean time since stroke=4.6 years, SD=5.8) participated in the study, which was conducted at an outpatient rehabilitation clinic in the midwestern United States. The w/h UE FM and Action Research Arm Test (ARAT) were administered twice. Intraclass correlation coefficients (ICCs), Cronbach alpha, and ordinal alpha were computed to determine reliability, and Spearman rank correlation coefficients and Bland-Altman plots were computed to establish validity. Intraclass correlation coefficients for the w/h UE FM and ARAT were .95 and .99, respectively. The w/h UE FM intrarater reliability and internal consistency were greater than .80, and concurrent validity was greater than .70. This also was the first stroke rehabilitative study to apply ordinal alpha to examine internal consistency values, revealing w/h UE FM levels greater than .85. Concurrent validity findings were corroborated by Bland-Altman plots. It appears that the w/h UE FM is a promising tool to measure distal upper extremity movement in patients with little active paretic wrist and finger movement. This finding widens the segment of patients on whom the w/h UE FM can be effectively used and addresses a gap, as commonly used measures necessitate active distal upper extremity movement. © 2015 American Physical Therapy Association.

  20. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    NASA Astrophysics Data System (ADS)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can produce important near-field and far-field optical responses. These optical properties have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging field of nanoscience, known as nanophotonics, is highly interdisciplinary, requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, fast advancements in computing capabilities, such as parallel computing, have become a critical element for investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, as the design of these devices requires extensive memory and extremely long core hours; distributed computing platforms built on parallel computing are therefore required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques and uses computing machines to analyze and solve otherwise intractable scientific challenges. In particular, parallel computing is a form of computation that operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. In this dissertation, we report a series of new nanophotonic developments using advanced parallel computing techniques. The applications include structure optimizations at the nanoscale, both to control the electromagnetic response of materials and to manipulate nanoscale structures for enhanced field concentration, which enable breakthroughs in imaging and sensing systems (chapters 3 and 4) and improve the spatio-temporal resolution of spectroscopies (chapter 5). We also report investigations of the confinement of light-matter interactions in the quantum mechanical regime, where size-dependent novel properties enhance a wide range of technologies, from tunable and efficient light sources and detectors to other nanophotonic elements with enhanced functionality (chapters 6 and 7).

  1. Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales

    NASA Astrophysics Data System (ADS)

    Dongare, Avinash M.

    2014-12-01

    A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to expand the capabilities of molecular dynamics (MD) simulations to model the behaviour of metallic materials at the mesoscales. This mesoscale method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure and using scaling relationships for the atomic-scale interatomic potentials in MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom and therefore the energetics of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of these representative atoms to account for the missing atoms in the microstructure. This scaling of the energetics renders improved time steps for the QCGD simulations. The success of the QCGD method is demonstrated by the prediction of the structural energetics, high-temperature thermodynamics, deformation behaviour of interfaces, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, as well as the wave propagation behaviour, as would be predicted using MD simulations for a reduced number of representative atoms. The reduced number of atoms and the improved time steps enable the modelling of metallic materials at the mesoscale in extreme environments.

  2. Brief Assessment of Motor Function: Content Validity and Reliability of the Upper Extremity Gross Motor Scale

    ERIC Educational Resources Information Center

    Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn

    2011-01-01

    Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five BAMF ordinal scales designed for quick documentation of gross, fine, and oral motor skill levels. Designed to be independent of age and…

  3. Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Brodland, G. Wayne

    2008-03-01

    The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10 239 elements, and running its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate, but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden were they not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse, and an important outcome of the study is a better understanding of the role of computational models in such environments.

  4. Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated.

    PubMed

    Chen, Xiaoguang; Brodland, G Wayne

    2008-04-11

    The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements, and running its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate, but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden were they not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse, and an important outcome of the study is a better understanding of the role of computational models in such environments.

  5. Variations of trends of indicators describing complex systems: Change of scaling precursory to extreme events

    NASA Astrophysics Data System (ADS)

    Keilis-Borok, V. I.; Soloviev, A. A.

    2010-09-01

    Socioeconomic and natural complex systems persistently generate extreme events, also known as disasters, crises, or critical transitions. Here we analyze patterns of background activity preceding extreme events in four complex systems: economic recessions, surges in homicides in a megacity, magnetic storms, and strong earthquakes. We use as a starting point the indicators describing the system's behavior and identify changes in an indicator's trend. Those changes constitute our background events (BEs). We demonstrate a premonitory pattern common to all four systems considered: relatively large-magnitude BEs become more frequent before an extreme event. A premonitory change of scaling has been found in various models and observations. Here we demonstrate this change in the scaling of uniformly defined BEs in four real complex systems, their enormous differences notwithstanding.

  6. Extremely Low Operating Current Resistive Memory Based on Exfoliated 2D Perovskite Single Crystals for Neuromorphic Computing.

    PubMed

    Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling

    2017-12-26

    Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current, down to 10 pA, are achieved, a value at least 1 order of magnitude lower than in conventional materials. Ionic migration and diffusion as an artificial synapse are realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.

  7. Evaluation of empirical relationships between extreme rainfall and daily maximum temperature in Australia

    NASA Astrophysics Data System (ADS)

    Herath, Sujeewa Malwila; Sarukkalige, Ranjan; Nguyen, Van Thanh Van

    2018-01-01

    Understanding the relationships between extreme daily and sub-daily rainfall events and their governing factors is important in order to analyse the properties of extreme rainfall events in a changing climate. Atmospheric temperature is one of the dominant climate variables with a strong relationship to extreme rainfall events. In this study, a temperature-rainfall binning technique is used to evaluate the dependency of extreme rainfall on daily maximum temperature. The Clausius-Clapeyron (C-C) relation was found to describe the relationship between daily maximum temperature and a range of rainfall durations from 6 min up to 24 h for seven Australian weather stations, located in Adelaide, Brisbane, Canberra, Darwin, Melbourne, Perth and Sydney. The analysis shows that the rainfall-temperature scaling varies with location, temperature and rainfall duration. The Darwin Airport station shows a negative scaling relationship, while the other six stations show a positive relationship. To identify the trend in the scaling relationship over time, the same analysis is conducted using data covering 10-year periods. The results indicate that the dependency of extreme rainfall on temperature also varies with the analysis period. Further, this dependency shows an increasing trend for more extreme short-duration rainfall and a decreasing trend for average long-duration rainfall events at most stations. Seasonal variations of the scaling trends were analysed by grouping the summer and autumn seasons together and the winter and spring seasons together. For the 99th percentile of the 6 min, 1 h and 24 h durations, most scaling trends at the Perth, Melbourne and Sydney stations are increasing for both groups, while those at Adelaide and Darwin are decreasing. Furthermore, the majority of the scaling trends for the 50th percentile are decreasing for both groups.
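
    A sketch of the temperature-rainfall binning described above: bin events by daily maximum temperature, take a high percentile of intensity in each bin, and fit an exponential scaling rate (the C-C reference value is roughly +7% per degree Celsius). The data generated below are synthetic, not the Australian station records, and all bin choices are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      tmax = rng.uniform(10.0, 35.0, 20000)                              # daily max temperature (deg C)
      rain = rng.gamma(0.8, 2.0, 20000) * np.exp(0.07 * (tmax - 20.0))   # synthetic rainfall intensities

      bins = np.arange(10.0, 36.0, 2.0)
      centers = 0.5 * (bins[:-1] + bins[1:])
      p99 = np.array([np.percentile(rain[(tmax >= lo) & (tmax < hi)], 99)
                      for lo, hi in zip(bins[:-1], bins[1:])])

      # Slope of ln(P99) versus temperature gives the scaling rate in % per deg C
      slope = np.polyfit(centers, np.log(p99), 1)[0]
      print(f"scaling rate ~ {100.0 * (np.exp(slope) - 1.0):.1f}% per deg C")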

  8. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in keeping the model up to date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on longer time scales becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
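
    A minimal batch extreme learning machine, the core that OSELM and VC-OSELM build on: fixed random hidden-layer weights and output weights obtained by linear least squares. The recursive online update and the add/remove-hidden-node logic of VC-OSELM are not reproduced here; the function names, dimensions and data are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)

      def elm_fit(X, y, n_hidden=50):
          W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (kept fixed)
          b = rng.standard_normal(n_hidden)                 # random biases (kept fixed)
          H = np.tanh(X @ W + b)                            # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights by least squares
          return W, b, beta

      def elm_predict(model, X):
          W, b, beta = model
          return np.tanh(X @ W + b) @ beta

      X = rng.standard_normal((500, 3))                     # toy predictors (e.g. lagged flows)
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500)
      model = elm_fit(X, y)
      print("training RMSE:", np.sqrt(np.mean((elm_predict(model, X) - y) ** 2)))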

  9. Renormalization-group theory for finite-size scaling in extreme statistics

    NASA Astrophysics Data System (ADS)

    Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.

  10. Extreme-volatility dynamics in crude oil markets

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Qiu, Tian; Ren, Fei

    2017-02-01

    Based on concepts and methods from statistical physics, we investigate extreme-volatility dynamics in the crude oil markets, using the high-frequency data from 2006 to 2010 and the daily data from 1986 to 2016. The dynamic relaxation of extreme volatilities is described by a power law, whose exponents usually depend on the magnitude of extreme volatilities. In particular, the relaxation before and after extreme volatilities is time-reversal symmetric at the high-frequency time scale, but time-reversal asymmetric at the daily time scale. This time-reversal asymmetry is mainly induced by exogenous events. However, the dynamic relaxation after exogenous events exhibits the same characteristics as that after endogenous events. An interacting herding model both with and without exogenous driving forces could qualitatively describe the extreme-volatility dynamics.
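
    A sketch of the relaxation measurement described above: average volatility as a function of time since an extreme-volatility event, followed by a power-law fit on log-log axes. A GARCH(1,1) surrogate with illustrative parameters stands in for the crude-oil data, and the thresholds and window lengths are arbitrary.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 100_000
      omega, a, b = 0.05, 0.10, 0.85          # illustrative GARCH(1,1) parameters
      r = np.zeros(n)
      s2 = np.full(n, omega / (1.0 - a - b))  # start at the unconditional variance
      for t in range(1, n):
          s2[t] = omega + a * r[t - 1] ** 2 + b * s2[t - 1]
          r[t] = np.sqrt(s2[t]) * rng.standard_normal()

      vol = np.abs(r)
      events = np.where(vol > np.quantile(vol, 0.999))[0]
      events = events[events < n - 300]       # keep room for the post-event window

      lags = np.arange(1, 300)
      relax = np.array([vol[events + k].mean() for k in lags])

      # Fit a power-law decay to the excess volatility after the events
      excess = relax - vol.mean()
      mask = excess > 0
      p_exp = -np.polyfit(np.log(lags[mask]), np.log(excess[mask]), 1)[0]
      print("fitted relaxation exponent ~", round(p_exp, 2))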

  11. Shock-induced mechanochemistry in heterogeneous reactive powder mixtures

    NASA Astrophysics Data System (ADS)

    Gonzales, Manny; Gurumurthy, Ashok; Kennedy, Gregory; Neel, Christopher; Gokhale, Arun; Thadhani, Naresh

    The bulk response of compacted powder mixtures subjected to high-strain-rate loading conditions in various configurations is a manifestation of behavior at the meso-scale. Simulations at the meso-scale can provide additional confirmation of the possible origins of the observed response. This work investigates the bulk dynamic response of Ti + B + Al reactive powder mixtures under two extreme loading configurations - uniaxial stress and strain loading - leveraging highly resolved in-situ measurements and meso-scale simulations. Modified rod-on-anvil impact tests on a reactive pellet demonstrate an optimized stoichiometry promoting reaction in Ti + B + Al. Encapsulated powders subjected to shock compression via flyer-plate tests provide possible evidence of a shock-induced reaction at high pressures. Meso-scale simulations of the direct experimental configurations, employing highly resolved microstructural features of the Ti + B compacted mixture, show complex inhomogeneous deformation responses and reveal the importance of meso-scale features such as particle size and morphology and their effects on the measured response. Funding is generously provided by DTRA through Grant No. HDTRA1-10-1-0038 (Dr. Su Peiris - Program Manager) and by the SMART (AFRL Wright Patterson AFB) and NDSEG fellowships (High Performance Computing and Modernization Office).

  12. Foundational Tools for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

  13. Final report for “Extreme-scale Algorithms and Solver Resilience”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, William Douglas

    2017-06-30

    This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
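
    For context, one standard route to run-to-run reproducible reductions is to fix the summation order independently of the data decomposition, for example by summing fixed-size blocks in global index order. The sketch below illustrates that general idea only; it is not claimed to be the scheme developed in the project summarized above, and the function name and block size are arbitrary.

      import numpy as np

      def reproducible_dot(x, y, block=4096):
          # Partial sums over fixed global blocks, accumulated left to right, so the
          # rounding depends only on the block order, not on how the vectors were
          # distributed across processes.
          partials = [np.dot(x[i:i + block], y[i:i + block]) for i in range(0, len(x), block)]
          total = 0.0
          for p in partials:
              total += p
          return total

      rng = np.random.default_rng(7)
      x, y = rng.standard_normal(10_000), rng.standard_normal(10_000)
      print(reproducible_dot(x, y), np.dot(x, y))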

  14. Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive tasks for processing data. Furthermore, rendering complex 3D-Geodata, such as 3D-City models with an extremely high polygon count and a vast amount of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D-Geodata on nearly every device.

  15. Experience Paper: Software Engineering and Community Codes Track in ATPESC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; Riley, Katherine M.

    The Argonne Training Program in Extreme Scale Computing (ATPESC) was started by Argonne National Laboratory with the objective of expanding the ranks of better prepared users of high performance computing (HPC) machines. One of the unique aspects of the program was the inclusion of a software engineering and community codes track. The inclusion was motivated by the observation that projects with a good scientific and software process were better able to meet their scientific goals. In this paper we present our experience of running the software track from the beginning of the program until now. We discuss the motivations, the reception, and the evolution of the track over the years. We welcome discussion and input from the community to enhance the track in ATPESC, and also to facilitate the inclusion of similar tracks in other HPC-oriented training programs.

  16. Highlights of X-Stack ExM Deliverable: MosaStore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ripeanu, Matei

    2016-07-20

    This brief report highlights the experience gained with MosaStore, an exploratory part of the X-Stack project “ExM: System support for extreme-scale, many-task applications”. The ExM project proposed to use concurrent workflows supported by the Swift language and runtime as an innovative programming model to exploit parallelism in exascale computers. MosaStore aims to support this endeavor by improving storage support for workflow-based applications, more precisely by exploring the gains that can be obtained from co-designing the storage system and the workflow runtime engine. MosaStore has been developed primarily at the University of British Columbia.

  17. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    NASA Astrophysics Data System (ADS)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

    Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
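
    A simplified sketch of the pointerless idea referred to above: identify each octree node by its refinement level and Morton (Z-order) key, and keep the nodes in a hash table, so parent and child lookups become key arithmetic rather than pointer chasing. The function names are illustrative, and real AMR codes add 2:1 balance constraints, ghost layers and parallel partitioning, none of which are shown.

      def morton3d(x, y, z, level):
          """Interleave the bits of the integer cell coordinates at a given level."""
          key = 0
          for i in range(level):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      def parent(key):
          return key >> 3                      # drop the 3 bits selecting the child octant

      def children(key):
          return [(key << 3) | octant for octant in range(8)]

      # A pointerless octree is just a hash table keyed by (level, Morton key).
      octree = {(0, 0): "root"}
      key = morton3d(5, 3, 1, level=3)         # a level-3 cell at integer coordinates (5, 3, 1)
      octree[(3, key)] = "leaf"

      assert parent(children(key)[5]) == key   # child/parent round trip is pure key arithmetic
      print((2, parent(key)) in octree)        # parent/neighbour queries become hash lookups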

  18. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
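
    A sketch of the amplitude-multiplier idea using Katz's fractal-dimension estimator (one standard definition, not the paper's robust algorithm). Because Katz's measure mixes amplitude and time units, scaling the normalised signal by a multiplier changes the estimate non-linearly, which is the degree of freedom the optimisation above exploits; the signal and multiplier values are synthetic.

      import numpy as np

      def katz_fd(x):
          n = len(x) - 1                                   # number of steps in the curve
          t = np.arange(len(x), dtype=float)
          L = np.hypot(np.diff(t), np.diff(x)).sum()       # total curve length
          d = np.hypot(t - t[0], x - x[0]).max()           # planar extent (diameter)
          return np.log10(n) / (np.log10(n) + np.log10(d / L))

      rng = np.random.default_rng(8)
      signal = np.cumsum(rng.standard_normal(2000))
      signal = (signal - signal.mean()) / signal.std()     # normalised, dimensionless signal

      for multiplier in (0.1, 1.0, 10.0):                  # candidate amplitude multipliers
          print(multiplier, round(katz_fd(multiplier * signal), 3))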

  19. Development of Creep-Resistant, Alumina-Forming Ferrous Alloys for High-Temperature Structural Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Yukinori; Brady, Michael P.; Muralidharan, Govindarajan

    This paper overviews recent advances in developing novel alloy design concepts for creep-resistant, alumina-forming Fe-base alloys, including both ferritic and austenitic steels, for high-temperature structural applications in fossil-fired power generation systems. Protective, external alumina scales offer improved oxidation resistance compared to chromia scales in steam-containing environments at elevated temperatures. Alloy design utilizes computational thermodynamic tools with compositional guidelines based on experimental results accumulated in the last decade, along with design and control of the second-phase precipitates to maximize high-temperature strengths. The alloys developed to date, including ferritic (Fe-Cr-Al-Nb-W base) and austenitic (Fe-Cr-Ni-Al-Nb base) alloys, successfully incorporated the balanced properties of steam/water vapor-oxidation and/or ash-corrosion resistance and improved creep strength. Development of cast alumina-forming austenitic (AFA) stainless steel alloys is also in progress with successful improvement of higher temperature capability targeting up to ~1100°C. The current alloy design approach and developmental efforts, with the guidance of computational tools, were found to be beneficial for further development of new heat-resistant steel alloys for various extreme environments.

  20. Atomic-scale phase composition through multivariate statistical analysis of atom probe tomography data.

    PubMed

    Keenan, Michael R; Smentkowski, Vincent S; Ulfig, Robert M; Oltman, Edward; Larson, David J; Kelly, Thomas F

    2011-06-01

    We demonstrate for the first time that multivariate statistical analysis techniques can be applied to atom probe tomography data to estimate the chemical composition of a sample at the full spatial resolution of the atom probe in three dimensions. Whereas the raw atom probe data provide the specific identity of an atom at a precise location, the multivariate results can be interpreted in terms of the probabilities that an atom representing a particular chemical phase is situated there. When aggregated to the size scale of a single atom (∼0.2 nm), atom probe spectral-image datasets are huge and extremely sparse. In fact, the average spectrum will have somewhat less than one total count per spectrum due to imperfect detection efficiency. These conditions, under which the variance in the data is completely dominated by counting noise, test the limits of multivariate analysis, and an extensive discussion of how to extract the chemical information is presented. Efficient numerical approaches to performing principal component analysis (PCA) on these datasets, which may number hundreds of millions of individual spectra, are put forward, and it is shown that PCA can be computed in a few seconds on a typical laptop computer.
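
    The core numerical step described above amounts to a principal component analysis of a very sparse (voxel x mass-to-charge channel) count matrix. The sketch below uses a truncated sparse SVD with a Poisson-style weighting that is commonly used for counting noise; it is illustrative and is not claimed to be the exact preprocessing of the paper, and all sizes and densities are invented.

      import numpy as np
      from scipy.sparse import random as sparse_random, diags
      from scipy.sparse.linalg import svds

      rng = np.random.default_rng(9)
      n_voxels, n_channels = 20_000, 150
      counts = sparse_random(n_voxels, n_channels, density=0.005,      # well under 1 count per voxel
                             random_state=9,
                             data_rvs=lambda k: rng.poisson(1.0, k) + 1)

      # Poisson-style weighting: scale rows and columns by the square roots of the
      # mean image and mean spectrum before extracting components.
      g = np.sqrt(np.asarray(counts.mean(axis=1)).ravel() + 1e-12)
      h = np.sqrt(np.asarray(counts.mean(axis=0)).ravel() + 1e-12)
      weighted = diags(1.0 / g) @ counts @ diags(1.0 / h)

      # Truncated SVD gives the leading principal components without densifying the matrix
      u, s, vt = svds(weighted, k=5)
      print("leading singular values:", np.sort(s)[::-1])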

  1. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  2. Modeling Microalgae Productivity in Industrial-Scale Vertical Flat Panel Photobioreactors.

    PubMed

    Endres, Christian H; Roth, Arne; Brück, Thomas B

    2018-05-01

    Potentially achievable biomass yields are a decisive performance indicator for the economic viability of mass cultivation of microalgae. In this study, a computer model has been developed and applied to estimate the productivity of microalgae for large-scale outdoor cultivation in vertical flat panel photobioreactors. Algae growth is determined based on simulations of the reactor temperature and light distribution. Site-specific weather and irradiation data are used for annual yield estimations in six climate zones. Shading and reflections between opposing panels and between panels and the ground are dynamically computed based on the reactor geometry and the position of the sun. The results indicate that thin panels (≤0.05 m) are best suited for the assumed cell density of 2 g L⁻¹ and that reactor panels should face in the north-south direction. Panel spacings of 0.4-0.75 m at a panel height of 1 m appear most suitable for commercial applications. Under these preconditions, yields of around 10 kg m⁻² a⁻¹ are possible for most locations in the U.S. Only in hot climates are significantly lower yields to be expected, as extreme reactor temperatures limit overall productivity.

  3. Rolex: Resilience-oriented language extensions for extreme-scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert F.; Hukerikar, Saurabh

    Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half-a-century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely to the number of components in the system and therefore faults and resultant system level failures will increase, as systems scale in terms of the number of processor cores and memory modules used. However, every error detected need not cause catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.

  4. Rolex: Resilience-oriented language extensions for extreme-scale systems

    DOE PAGES

    Lucas, Robert F.; Hukerikar, Saurabh

    2016-05-26

    Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half-a-century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely to the number of components in the system and therefore faults and resultant system level failures will increase, as systems scale in terms of the number of processor cores and memory modules used. However, every error detected need not cause catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.

  5. Dendrites, deep learning, and sequences in the hippocampus.

    PubMed

    Bhalla, Upinder S

    2017-10-12

    The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This works for sensory representations, for memory, and for behavioral context. How does it fit in such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is that of order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, in motor control, in planning actions, and in memory. Against this strong evidence for the phenomenon, there are currently more models than definite experiments about how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been extensively studied at the behavioral, systems, and modeling level, but again physiological mechanisms are fewer. It is against this backdrop that I discuss two recent developments in neural sequence computation, that at face value share little beyond the label "neural." These are dendritic sequence discrimination, and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory - apparently extreme ends of the spectrum of neural circuit detail. I suggest that each of these topics has deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation. © 2017 Wiley Periodicals, Inc.

  6. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students' overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude, and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.

  7. Global Weirding? - Using Very Large Ensembles and Extreme Value Theory to assess Changes in Extreme Weather Events Today

    NASA Astrophysics Data System (ADS)

    Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.

    2014-12-01

    A shift in the distribution of socially relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments suggesting also a change in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter 2013/2014 using the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, ...) and geographical region. We find that the location parameter changes for most variables but, depending on the region and variables, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample might otherwise be regarded as an outlier. The ~40,000-member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Combining extreme value theory with very large ensemble simulations therefore allows us to understand the dynamics of changes in extreme events in a way that is not possible with extreme value theory alone, and also shows in which cases statistics combined with smaller ensembles give results as valid as those from very large initial-condition ensembles.
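
    As a minimal illustration of the kind of GEV-parameter comparison described above (not the weather@home analysis code; the synthetic "ensemble maxima" and parameter values below are placeholders), the fit can be done with scipy:

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # Placeholder seasonal maxima, one value per ensemble member.
        factual = rng.gumbel(loc=12.0, scale=3.0, size=40000)
        counterfactual = rng.gumbel(loc=11.0, scale=2.8, size=40000)

        for name, sample in [("factual", factual), ("counterfactual", counterfactual)]:
            shape, loc, scale = genextreme.fit(sample)  # scipy's c is minus the usual GEV shape
            print(f"{name}: shape={shape:.3f} loc={loc:.2f} scale={scale:.2f}")

        # 1-in-100 event level implied by the factual fit.
        print(genextreme.isf(0.01, *genextreme.fit(factual)))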

  8. Didactic Dissonance: Teacher Roles in Computer Gaming Situations in Kindergartens

    ERIC Educational Resources Information Center

    Vangsnes, Vigdis; Økland, Nils Tore Gram

    2015-01-01

    In computer gaming situations in kindergartens, the pre-school teacher's function can be viewed in a continuum. At one extreme is the teacher who takes an intervening role and at the other extreme is the teacher who chooses to restrict herself/himself to an organising or distal role. This study shows that both the intervening position and the…

  9. Changes and Attribution of Extreme Precipitation in Climate Models: Subdaily and Daily Scales

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Villarini, G.; Scoccimarro, E.; Vecchi, G. A.

    2017-12-01

    Extreme precipitation events are responsible for numerous hazards, including flooding, soil erosion, and landslides. Because of their significant socio-economic impacts, the attribution and projection of these events is of crucial importance to improve our response, mitigation and adaptation strategies. Here we present results from our ongoing work. In terms of attribution, we use idealized experiments [a pre-industrial control experiment (PI) and a 1% per year increase (1%CO2) in atmospheric CO2] from ten general circulation models produced under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the fraction of attributable risk to examine the CO2 effects on extreme precipitation at the sub-daily and daily scales. We find that the increased CO2 concentration substantially increases the odds of the occurrence of sub-daily precipitation extremes compared to the daily scale in most areas of the world, with the exception of some regions in the sub-tropics, likely in relation to the subsidence of the Hadley Cell. These results point to the large role that atmospheric CO2 plays in extreme precipitation under an idealized framework. Furthermore, we investigate the changes in extreme precipitation events with the Community Earth System Model (CESM) climate experiments using the scenarios consistent with the 1.5°C and 2°C temperature targets. We find that the frequency of annual extreme precipitation at a global scale increases in both the 1.5°C and 2°C scenarios until around 2070, after which the magnitude of the trend becomes much weaker or even negative. Overall, the frequency of global annual extreme precipitation is similar between 1.5°C and 2°C for the period 2006-2035, and the changes in extreme precipitation in individual seasons are consistent with those for the entire year. The frequency of extreme precipitation in the 2°C experiments is higher than for the 1.5°C experiment after the late 2030s, particularly for the period 2071-2100.
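
    The fraction of attributable risk used above is not defined in the abstract; its standard form, with P_0 the exceedance probability of the precipitation threshold in the pre-industrial control and P_1 in the 1%CO2 experiment, is

        \mathrm{FAR} = 1 - \frac{P_0}{P_1}.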

  10. Role of absorbing aerosols on hot extremes in India in a GCM

    NASA Astrophysics Data System (ADS)

    Mondal, A.; Sah, N.; Venkataraman, C.; Patil, N.

    2017-12-01

    Temperature extremes and heat waves in North-Central India during the summer months of March through June are known for causing significant impacts in terms of human health, productivity and mortality. While greenhouse gas-induced global warming is generally believed to intensify the magnitude and frequency of such extremes, aerosols are usually associated with an overall cooling, by virtue of their dominant radiation-scattering nature, in most world regions. Recently, large-scale atmospheric conditions leading to heat wave and extreme temperature conditions have been analysed for the North-Central Indian region. However, the role of absorbing aerosols, including black carbon and dust, in mediating hot extremes in the region is still not well understood. In this study, we use 30-year simulations from a chemistry-coupled atmosphere-only General Circulation Model (GCM), ECHAM6-HAM2, forced with evolving aerosol emissions in an interactive aerosol module, along with observed sea surface temperatures, to examine large-scale and mesoscale conditions during hot extremes in India. The model is first validated with observed gridded temperature and reanalysis data, and is found to represent realistically the observed variations in temperature in the North-Central region and the concurrent large-scale atmospheric conditions during high temperature extremes. During these extreme events, changes in near-surface properties include a reduction in single scattering albedo and an enhancement in the short-wave solar heating rate, compared to climatological conditions. This is accompanied by positive anomalies of black carbon and dust aerosol optical depths. We conclude that the large-scale atmospheric conditions such as the presence of anticyclones and clear skies, conducive to heat waves and high temperature extremes, are exacerbated by absorbing aerosols in North-Central India. Future air quality regulations are expected to reduce sulfate particles and their masking of GHG warming. It is concurrently important to mitigate emissions of warming black carbon particles to manage future climate change-induced hot extremes.

  11. Relationship between diffusion tensor fractional anisotropy and motor outcome in patients with hemiparesis after corona radiata infarct.

    PubMed

    Koyama, Tetsuo; Marumoto, Kohei; Miyake, Hiroji; Domen, Kazuhisa

    2013-11-01

    This study examined the relationship between fractional anisotropy (FA) values of magnetic resonance-diffusion tensor imaging (DTI) and motor outcome (1 month after onset) in 15 patients with hemiparesis after ischemic stroke of corona radiata lesions. DTI data were obtained on days 14-18. FA values within the cerebral peduncle were analyzed using a computer-automated method. Motor outcome of hemiparesis was evaluated according to Brunnstrom stage (BRS; 6-point scale: severe to normal) for separate shoulder/elbow/forearm, wrist/hand, and lower extremity functions. The ratio of FA values in the affected hemisphere to those in the unaffected hemisphere (rFA) was assessed in relation to the BRS data (Spearman rank correlation test, P<.05). rFA values ranged from .715 to 1.002 (median=.924). BRS ranged from 1 to 6 (median=4) for shoulder/elbow/forearm, from 1 to 6 (median=5) for wrist/hand, and from 2 to 6 (median=4) for the lower extremities. Analysis revealed statistically significant relationships between rFA and upper extremity functions (correlation coefficient=.679 for shoulder/elbow/forearm and .706 for wrist/hand). Although slightly less evident, the relationship between rFA and lower extremity function was also statistically significant (correlation coefficient=.641). FA values within the cerebral peduncle are moderately associated with the outcome of both upper and lower extremity functions, suggesting that DTI may be applicable for outcome prediction in stroke patients with corona radiata infarct. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
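
    A minimal sketch of the ratio-and-correlation analysis described above (illustrative placeholder values, not the study's data or code):

        import numpy as np
        from scipy.stats import spearmanr

        fa_affected = np.array([0.52, 0.58, 0.61, 0.55, 0.63])    # placeholder FA values
        fa_unaffected = np.array([0.66, 0.65, 0.64, 0.67, 0.66])
        brs_wrist_hand = np.array([2, 3, 5, 3, 6])                 # 6-point ordinal scale

        rfa = fa_affected / fa_unaffected                          # affected/unaffected ratio
        rho, p_value = spearmanr(rfa, brs_wrist_hand)              # Spearman rank correlation
        print(f"rFA: {rfa.round(3)}, rho={rho:.3f}, p={p_value:.3f}")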

  12. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (\\url{muq.mit.edu}).

  13. Synoptic-scale and mesoscale environments conducive to forest fires during the October 2003 extreme fire event in Southern California

    Treesearch

    Chenjie Huang; Y.L. Lin; M.L. Kaplan; Joseph J.J. Charney

    2009-01-01

    This study has employed both observational data and numerical simulation results to diagnose the synoptic-scale and mesoscale environments conducive to forest fires during the October 2003 extreme fire event in southern California. A three-stage process is proposed to illustrate the coupling of the synoptic-scale forcing that is evident from the observations,...

  14. Extreme hydronephrosis due to ureteropelvic junction obstruction in infant (case report).

    PubMed

    Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa

    2016-01-01

    Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is more commonly affected than the right, and the condition is more common in males. To determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. In serial postnatal ultrasonography, renal scintigraphy and computed tomography urography, we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, a lower-dose computed tomography urography should be performed before the surgical procedure.

  15. Entropy, extremality, euclidean variations, and the equations of motion

    NASA Astrophysics Data System (ADS)

    Dong, Xi; Lewkowycz, Aitor

    2018-01-01

    We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G_N, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian, which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G_N. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
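
    For reference, the functional being extremized in this context is usually written, to the orders discussed, as the generalized entropy of a candidate bulk surface X (a standard form consistent with the abstract, not a quotation of the paper's equations):

        S_{\mathrm{gen}}(X) = \frac{\mathrm{Area}(X)}{4 G_N} + S_{\mathrm{bulk}}(X),
        \qquad \delta_X\, S_{\mathrm{gen}}(X) = 0,

    with the quantum extremal surface defined by the second (extremality) condition.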

  16. The results of bone deformity correction using a spider frame with web-based software for lower extremity long bone deformities.

    PubMed

    Tekin, Ali Çağrı; Çabuk, Haluk; Dedeoğlu, Süleyman Semih; Saygılı, Mehmet Selçuk; Adaş, Müjdat; Esenyel, Cem Zeki; Büyükkurt, Cem Dinçay; Tonbul, Murat

    2016-03-22

    To present the functional and radiological results and evaluate the effectiveness of a computer-assisted external fixator (spider frame) in patients with lower extremity shortness and deformity. The study comprised 17 patients (14 male, 3 female) who were treated for lower extremity long bone deformity and shortness between 2012 and 2015 using a spider frame. The procedure's level of difficulty was determined preoperatively using the Paley Scale. Postoperatively, the results for the patients who underwent tibial operations were evaluated using the Paley criteria modified by ASAMI, and the results for the patients who underwent femoral operations were evaluated according to the Paley scoring system. The evaluations were made by calculating the External Fixator and Distraction indexes. The mean age of the patients was 24.58 years (range, 5-51 years). The spider frame was applied to the femur in 10 patients and to the tibia in seven. The mean follow-up period was 15 months (range, 6-31 months) from the operation day, and the mean amount of lengthening was 3.0 cm (range, 1-6 cm). The mean duration of fixator application was 202.7 days (range, 104-300 days). The mean External Fixator Index was 98 days/cm (range, 42-265 days/cm). The mean Distraction Index was 10.49 days/cm (range, 10-14 days/cm). The computer-assisted external fixator system (spider frame) achieves single-stage correction in cases of both deformity and shortness. The system can be applied easily, and because of its high-tech software, it offers the possibility of postoperative treatment of the deformity.
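
    For reference, the two indexes reported above are conventionally defined as follows (standard definitions in days/cm, not quoted from the paper):

        \text{External Fixator Index} = \frac{\text{days in external fixator}}{\text{length gained (cm)}},
        \qquad
        \text{Distraction Index} = \frac{\text{days of distraction}}{\text{length gained (cm)}}.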

  17. Predictive characterization of aging and degradation of reactor materials in extreme environments. Final report, December 20, 2013 - September 20, 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Jianmin

    Understanding of reactor material behavior in extreme environments is vital not only to the development of new materials for the next generation of nuclear reactors, but also to the extension of the operating lifetimes of the current fleet of nuclear reactors. To this end, this project developed and applied a suite of unique experimental techniques, augmented by a mesoscale computational framework, to understand and predict the long-term effects of irradiation, temperature, and stress on material microstructures and their macroscopic behavior. The experimental techniques and computational tools were demonstrated on two distinctive types of reactor materials, namely Zr alloys and high-Cr martensitic steels. These materials were chosen as the test beds because they are the archetypes of high-performance reactor materials (cladding, wrappers, ducts, pressure vessel, piping, etc.). To fill the knowledge gaps, and to meet the technology needs, a suite of innovative in situ transmission electron microscopy (TEM) characterization techniques (heating, heavy ion irradiation, He implantation, quantitative small-scale mechanical testing, and various combinations thereof) was developed and used to elucidate and map the fundamental mechanisms of microstructure evolution in both Zr and Cr alloys for a wide range of environmental boundary conditions in the thermal-mechanical-irradiation input space. Knowledge gained from the experimental observations of the active mechanisms and the role of local microstructural defects in the response of the material has been incorporated into a mathematically rigorous and comprehensive three-dimensional mesoscale framework capable of accounting for compositional variation, microstructural evolution and localized deformation (radiation damage) to predict aging and degradation of key reactor materials operating in extreme environments. Predictions from this mesoscale framework were compared with the in situ TEM observations to validate the model.

  18. Materials by Design—A Perspective From Atoms to Structures

    PubMed Central

    Buehler, Markus J.

    2013-01-01

    Biological materials are effectively synthesized, controlled, and used for a variety of purposes—in spite of limitations in energy, quality, and quantity of their building blocks. Whereas the chemical composition of materials in the living world plays some role in achieving functional properties, the way components are connected at different length scales defines what material properties can be achieved, how they can be altered to meet functional requirements, and how they fail in disease states and other extreme conditions. Recent work has demonstrated this by using large-scale computer simulations to predict materials properties from fundamental molecular principles, combined with experimental work and new mathematical techniques to categorize complex structure-property relationships into a systematic framework. Enabled by such categorization, we discuss opportunities based on the exploitation of concepts from distinct hierarchical systems that share common principles in how function is created, linking music to materials science. PMID:24163499

  19. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and the parallel solution of blocks, taking as a departure point the BBD matrix structure. This block-parallel approach may give considerable profit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
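
    A generic single-node sketch of the Newton-Raphson DC iteration underlying such simulators (a toy one-unknown circuit with assumed element values; not the paper's block-parallel BBD code):

        import numpy as np

        Is, Vt = 1e-14, 0.02585     # diode saturation current [A], thermal voltage [V]
        R, Vsrc = 1e3, 5.0          # series resistance [ohm] and source voltage [V]

        def residual_and_jacobian(v):
            # KCL at the single node: (Vsrc - v)/R - Is*(exp(v/Vt) - 1) = 0
            f = (Vsrc - v) / R - Is * (np.exp(v / Vt) - 1.0)
            df = -1.0 / R - (Is / Vt) * np.exp(v / Vt)
            return f, df

        v = 0.6                      # initial guess near the diode knee
        for _ in range(100):
            f, df = residual_and_jacobian(v)
            v += np.clip(-f / df, -0.1, 0.1)   # crude damping keeps exp() well-behaved
            if abs(f) < 1e-12:
                break
        print(f"DC operating point: v = {v:.4f} V")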

  20. Some aspects of wind tunnel magnetic suspension systems with special application at large physical scales

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1983-01-01

    Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied, and it is concluded that system cost and complexity need not be seriously increased.

  1. Machine Learning Toolkit for Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-03-31

    Support Vector Machines (SVM) is a popular machine learning technique, which has been applied to a wide range of domains such as science, finance, and social networks for supervised learning. MaTEx undertakes the challenge of designing a scalable parallel SVM training algorithm for large-scale systems, which include commodity multi-core machines, tightly connected supercomputers and cloud computing systems. Several techniques are proposed for improved speed and memory usage, including adaptive and aggressive elimination of samples for faster convergence, and a sparse-format representation of data samples. Several heuristics, ranging from earliest-possible to lazy elimination of non-contributing samples, are considered in MaTEx. Where an early sample elimination might result in a false positive, low-overhead mechanisms for the reconstruction of key data structures are proposed. The proposed algorithm and heuristics are implemented and evaluated on various publicly available datasets.
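
    The idea that most samples never contribute to the final model can be illustrated on a single node with scikit-learn's built-in shrinking heuristic (an analogy to the elimination strategies above, not MaTEx itself):

        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
        clf = SVC(kernel="rbf", C=1.0, shrinking=True).fit(X, y)
        # Only the support vectors matter for prediction; the rest could be eliminated.
        print(f"{len(clf.support_)} support vectors out of {len(X)} samples")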

  2. Polymer translocation under a pulling force: Scaling arguments and threshold forces

    NASA Astrophysics Data System (ADS)

    Menais, Timothée

    2018-02-01

    DNA translocation through nanopores is one of the most promising strategies for next-generation sequencing technologies. Most experimental and numerical works have focused on polymer translocation biased by electrophoresis, where a pulling force acts on the polymer within the nanopore. An alternative strategy, however, is emerging, which uses optical or magnetic tweezers. In this case, the pulling force is exerted directly at one end of the polymer, which strongly modifies the translocation process. In this paper, we report numerical simulations of both linear and structured (mimicking DNA) polymer models, simple enough to allow for a statistical treatment of the pore structure effects on the translocation time probability distributions. Based on extremely extended computer simulation data, we (i) propose scaling arguments for an extension of the predicted translocation times τ ∼ N²F⁻¹ over the moderate-force range and (ii) analyze the effect of pore size and polymer structuration on translocation times τ.

  3. Convective phenomena at high resolution over Europe and the Mediterranean. The joint EURO-CORDEX and Med-CORDEX flagship pilot study

    NASA Astrophysics Data System (ADS)

    Coppola, E.; Sobolowski, S.

    2017-12-01

    The joint EURO-CORDEX and Med-CORDEX Flagship Pilot Study, dedicated to the frontier research of using convection-permitting (CP) models to address the impact of human-induced climate change on convection, has recently been approved, and the scientific community behind the project comprises 30 European scientific institutes. The motivations for such a challenge are the availability of large field campaigns dedicated to the study of heavy precipitation events; the increased computing capacity and model developments; the emerging trend signals in extreme precipitation at daily and, mainly, sub-daily time scales in the Mediterranean and Alpine regions; and the priority of convective extreme events under the WCRP Grand Challenge on climate extremes. The main objectives of this effort are to investigate convective-scale events, their processes and changes in a few key regions of Europe and the Mediterranean using CP RCMs, statistical models and available observations; to provide a collective assessment of the modeling capacity at CP scale; and to shape a coherent and collective assessment of the consequences of climate change on convective event impacts at local to regional scales. The scientific aims of this research are to investigate how convective events and the damaging phenomena associated with them will respond to changing climate conditions in different European climate zones, to understand whether an improved representation of convective phenomena at convection-permitting scales will lead to upscaled added value, and finally to assess the possibility of replacing these costly convection-permitting experiments with statistical approaches such as "convection emulators". The common initial domain will be an extended Alpine domain, and all the groups will simulate a minimum 10-year period with ERA-Interim boundary conditions, with the possibility of two other sub-domains, one in northwest continental Europe and another in the southeast Mediterranean. The scenario simulations will be completed for three different 10-year time slices, one in the historical period, one in the near future and the last one in the far future, for the RCP8.5 scenario. The first target of this scientific community is to have an ensemble of 1-2 year ERA-Interim simulations ready by late 2017 and a set of test cases to use as a pilot study.

  4. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.
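
    For context, the thermal rate constant extracted from a batch of quasiclassical trajectories is commonly estimated with the standard QCT expression (background, not a quotation from this work), where mu is the collision reduced mass, b_max the maximum sampled impact parameter, and N_r/N_tot the fraction of reactive trajectories:

        k(T) = \sqrt{\frac{8 k_B T}{\pi \mu}}\; \pi b_{\max}^{2}\; \frac{N_r}{N_{\mathrm{tot}}}.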

  5. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    NASA Astrophysics Data System (ADS)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.
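
    The "super-exponential" lifetime scaling referred to above is commonly fitted in the transitional pipe-flow literature with a double-exponential form (an empirical form consistent with the abstract; a and b are fit constants):

        \tau(\mathrm{Re}) \propto \exp\!\big(\exp(a\,\mathrm{Re} + b)\big).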

  6. Automation Rover for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  7. Pore-scale mechanisms of gas flow in tight sand reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silin, D.; Kneafsey, T.J.; Ajo-Franklin, J.B.

    2010-11-30

    Tight gas sands are unconventional hydrocarbon energy resources storing large volumes of natural gas. Microscopy and 3D imaging of reservoir samples at different scales and resolutions provide insights into the structure of the pore space. Although the grains are not significantly smaller in size than in conventional sandstones, the extremely dense grain packing makes the pore space tortuous, and the porosity is small. In some cases the inter-granular void space is presented by micron-scale slits, whose geometry requires imaging at submicron resolutions. Maximal Inscribed Spheres computations simulate different scenarios of capillary-equilibrium two-phase fluid displacement. For tight sands, the simulations predict an unusually low wetting-fluid saturation threshold, at which the non-wetting phase becomes disconnected. Flow simulations in combination with Maximal Inscribed Spheres computations evaluate relative permeability curves. The computations show that at the threshold saturation, when the nonwetting fluid becomes disconnected, the flow of both fluids is practically blocked. The nonwetting phase is immobile due to the disconnectedness, while the permeability to the wetting phase remains essentially equal to zero due to the pore space geometry. This observation explains the Permeability Jail, which was defined earlier by others. The gas is trapped by capillarity, and the brine is immobile due to the dynamic effects. At the same time, in drainage, simulations predict that the mobility of at least one of the fluids is greater than zero at all saturations. A pore-scale model of gas condensate dropout predicts the rate to be proportional to the scalar product of the fluid velocity and the pressure gradient. The narrowest constriction in the flow path is subject to the highest rate of condensation. The pore-scale model naturally upscales to Panfilov's Darcy-scale model, which implies that the condensate dropout rate is proportional to the pressure gradient squared. The pressure gradient is greatest near the matrix-fracture interface. The distinctive two-phase flow properties of tight sand imply that a small amount of gas condensate can seriously affect the recovery rate by blocking gas flow. Dry gas injection, pressure maintenance, or heating can help to preserve the mobility of the gas phase. A small amount of water can increase the mobility of gas condensate.
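
    The two dropout-rate statements above can be written compactly; inserting Darcy's law into the pore-scale form shows why the Darcy-scale rate goes as the pressure gradient squared (a paraphrase of the abstract's argument, not a quotation):

        q_{\text{pore}} \propto \mathbf{u}\cdot\nabla p,
        \qquad \mathbf{u} = -\frac{k}{\mu}\,\nabla p
        \;\;\Rightarrow\;\;
        q_{\text{Darcy}} \propto |\nabla p|^{2}.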

  8. Relating rainfall characteristics to cloud top temperatures at different scales

    NASA Astrophysics Data System (ADS)

    Klein, Cornelia; Belušić, Danijel; Taylor, Christopher

    2017-04-01

    Extreme rainfall from mesoscale convective systems (MCS) poses a threat to lives and livelihoods of the West African population through increasingly frequent devastating flooding and loss of crops. However, despite the significant impact of such extreme events, the dominant processes favouring their occurrence are still under debate. In the data-sparse West African region, rainfall radar data from the Tropical Rainfall Measuring Mission (TRMM) gives invaluable information on the distribution and frequency of extreme rainfall. The TRMM 2A25 product provides a 15-year dataset of snapshots of surface rainfall from 2-4 overpasses per day. Whilst this sampling captures the overall rainfall characteristics, it is neither long nor frequent enough to diagnose changes in MCS properties, which may be linked to the trend towards rainfall intensification in the region. On the other hand, Meteosat geostationary satellites provide long-term sub-hourly records of cloud top temperatures, raising the possibility of combining these with the high-quality rainfall data from TRMM. In this study, we relate TRMM 2A25 rainfall to Meteosat Second Generation (MSG) cloud top temperatures, which are available from 2004 at 15-minute intervals, to get a more detailed picture of the structure of intense rainfall within the life cycle of MCS. We find TRMM rainfall intensities within an MCS to be strongly coupled with MSG cloud top temperatures: the probability for extreme rainfall increases from <10% for minimum temperatures warmer than -40°C to over 70% when temperatures drop below -70°C, confirming the potential of analysing cloud-top temperatures as a proxy for extreme rain. The sheer size of MCS raises the question of which scales of sub-cloud structures are more likely to be associated with extreme rain than others. In the end, this information could help to associate scale changes in cloud top temperatures with processes that affect the probability of extreme rain. We use 2D continuous wavelets to decompose cloud top temperatures into power spectra at scales between 15 and 200km. From these, cloud sub-structures are identified as circular areas of respective scale with local power maxima in their centre. These areas are then mapped onto coinciding TRMM rainfall, allowing us to assign rainfall fields to sub-cloud features of different scales. We find a higher probability for extreme rainfall for cloud features above a scale of 30km, with features 100km contributing most to the number of extreme rainfall pixels. Over the average diurnal cycle, the number of smaller cloud features between 15-60km shows an increase between 15 - 1700UTC, gradually developing into larger ones. The maximum of extreme rainfall pixels around 1900UTC coincides with a peak for scales 100km, suggesting a dominant role of these scales for intense rain for the analysed cloud type. Our results demonstrate the suitability of 2D wavelet decomposition for the analysis of sub-cloud structures and their relation to rainfall characteristics, and help us to understand long-term changes in the properties of MCS.

  9. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.

  10. A European Flagship Programme on Extreme Computing and Climate

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2017-04-01

    In 2016, an outline proposal for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling, co-authored by a number of leading climate modelling scientists from around Europe, was sent to the EU via its Future and Emerging Technologies (FET) Flagship Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.

  11. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    PubMed

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
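
    For reference, the extreme value (Gumbel) form whose parameters lambda and K are estimated is the standard one used for alignment-score significance, with m and n the lengths of the two sequences (standard background, not the paper's derivation):

        P(S \ge x) \approx 1 - \exp\!\left(-K\,m\,n\,e^{-\lambda x}\right).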

  12. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  13. Multiscale Measurement of Extreme Response Style

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Newton, Joseph R.

    2011-01-01

    This article extends a methodological approach considered by Bolt and Johnson for the measurement and control of extreme response style (ERS) to the analysis of rating data from multiple scales. Specifically, it is shown how the simultaneous analysis of item responses across scales allows for more accurate identification of ERS, and more effective…

  14. Variability of temperature sensitivity of extreme precipitation from a regional-to-local impact scale perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Kirchengast, G.

    2016-12-01

    Relating precipitation intensity to temperature is a popular approach to assess potential changes of extreme events in a warming climate. Potential increases in extreme rainfall induced hazards, such as flash flooding, serve as motivation. It has not been addressed whether the temperature-precipitation scaling approach is meaningful on a regional to local level, where the risk of climate and weather impact is dealt with. Substantial variability of temperature sensitivity of extreme precipitation has been found that results from differing methodological assumptions as well as from varying climatological settings of the study domains. Two aspects are consistently found: First, temperature sensitivities beyond the expected consistency with the Clausius-Clapeyron (CC) equation are a feature of short-duration, convective, sub-daily to sub-hourly high-percentile rainfall intensities at mid-latitudes. Second, exponential growth ceases or reverts at threshold temperatures that vary from region to region, as moisture supply becomes limited. Analyses of pooled data, or of single or dispersed stations over large areas make it difficult to estimate the consequences in terms of local climate risk. In this study we test the meaningfulness of the scaling approach from an impact scale perspective. Temperature sensitivities are assessed using quantile regression on hourly and sub-hourly precipitation data from 189 stations in the Austrian south-eastern Alpine region. The observed scaling rates vary substantially, but distinct regional and seasonal patterns emerge. High sensitivity exceeding CC-scaling is seen on the 10-minute scale more than on the hourly scale, in storms shorter than 2 hours duration, and in shoulder seasons, but it is not necessarily a significant feature of the extremes. To be impact relevant, change rates need to be linked to absolute rainfall amounts. We show that high scaling rates occur in lower temperature conditions and thus have smaller effect on absolute precipitation intensities. While reporting of mere percentage numbers can be misleading, scaling studies can add value to process understanding on the local scale, if the factors that influence scaling rates are considered from both a methodological and a physical perspective.
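
    A minimal sketch of the kind of temperature-sensitivity estimate described above, using quantile regression of log intensity on temperature and converting the slope to a percentage change per kelvin for comparison with the ~7%/K Clausius-Clapeyron rate (synthetic placeholder data, not the study's observations or code):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.regression.quantile_regression import QuantReg

        rng = np.random.default_rng(1)
        temp = rng.uniform(0.0, 25.0, 5000)                              # temperature [degC]
        intensity = np.exp(0.065 * temp) * rng.lognormal(0.0, 0.8, 5000) # mm/h, ~6.5 %/K built in

        fit = QuantReg(np.log(intensity), sm.add_constant(temp)).fit(q=0.95)
        slope = fit.params[1]                                            # d ln(P) / dT
        print(f"scaling rate at the 95th percentile: {100 * (np.exp(slope) - 1):.1f} % per K")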

  15. Assessing changes in extreme river flow regulation from non-stationarity in hydrological scaling laws

    NASA Astrophysics Data System (ADS)

    Rodríguez, Estiven; Salazar, Juan Fernando; Villegas, Juan Camilo; Mercado-Bettín, Daniel

    2018-07-01

    Extreme flows are key components of river flow regimes that affect manifold hydrological, geomorphological and ecological processes with societal relevance. One fundamental characteristic of extreme flows in river basins is that they exhibit scaling properties which can be identified through scaling (power) laws. Understanding the physical mechanisms behind such scaling laws is a continuing challenge in hydrology, with potential implications for the prediction of river flow regimes in a changing environment and ungauged basins. After highlighting that the scaling properties are sensitive to environmental change, we develop a physical interpretation of how temporal changes in scaling exponents relate to the capacity of river basins to regulate extreme river flows. Regulation is defined here as the basins' capacity to either dampen high flows or to enhance low flows. Further, we use this framework to infer temporal changes in the regulation capacity of five large basins in tropical South America. Our results indicate that, during the last few decades, the Amazon river basin has been reducing its capacity to enhance low flows, likely as a consequence of pronounced environmental change in its south and south-eastern sub-basins. The proposed framework is widely applicable to different basins, and provides foundations for using scaling laws as empirical tools for inferring temporal changes of hydrological regulation, particularly relevant for identifying and managing hydrological consequences of environmental change.
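
    A common form of the scaling (power) laws referred to above relates a flow quantile Q_q to drainage area A; temporal changes in the exponent theta_q are then read as changes in regulation capacity (a standard hydrological scaling relation assumed here for illustration, not quoted from the paper):

        Q_q(A) = \alpha_q\, A^{\theta_q}.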

  16. 2009 fault tolerance for extreme-scale computing workshop, Albuquerque, NM - March 19-20, 2009.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, D. S.; Daly, J.; DeBardeleben, N.

    2009-02-01

    This is a report on the third in a series of petascale workshops co-sponsored by Blue Waters and TeraGrid to address challenges and opportunities for making effective use of emerging extreme-scale computing. This workshop was held to discuss fault tolerance on large systems for running large, possibly long-running applications. The main point of the workshop was to have systems people, middleware people (including fault-tolerance experts), and applications people talk about the issues and figure out what needs to be done, mostly at the middleware and application levels, to run such applications on the emerging petascale systems without having faults cause large numbers of application failures. The workshop found that there is considerable interest in fault tolerance, resilience, and reliability of high-performance computing (HPC) systems in general, at all levels of HPC. The only way to recover from faults is through the use of some redundancy, either in space or in time. Redundancy in time, in the form of writing checkpoints to disk and restarting at the most recent checkpoint after a fault that causes an application to crash or halt, is the most common tool used in applications today, but there are questions about how long this can continue to be a good solution as systems and memories grow faster than I/O bandwidth to disk. There is interest both in modifications to this, such as checkpoints to memory, partial checkpoints, and message logging, and in alternative ideas, such as in-memory recovery using residues. We believe that systematic exploration of these ideas holds the most promise for the scientific applications community. Fault tolerance has been an issue of discussion in the HPC community for at least the past 10 years; but much like other issues, the community has managed to put off addressing it during this period. There is a growing recognition that as systems continue to grow to petascale and beyond, the field is approaching the point where we don't have any choice but to address this through R&D efforts.
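
    A minimal sketch of the "redundancy in time" idea discussed above, i.e. periodic checkpointing to disk with restart from the most recent checkpoint (file name, state layout, and checkpoint interval are arbitrary illustrative choices):

        import os
        import pickle

        CKPT = "state.ckpt"

        def load_or_init():
            if os.path.exists(CKPT):
                with open(CKPT, "rb") as f:
                    return pickle.load(f)        # resume from the most recent checkpoint
            return {"step": 0, "value": 0.0}     # cold start

        def checkpoint(state):
            tmp = CKPT + ".tmp"
            with open(tmp, "wb") as f:
                pickle.dump(state, f)
            os.replace(tmp, CKPT)                # atomic rename: never leave a torn file

        state = load_or_init()
        while state["step"] < 1000:
            state["value"] += 1e-3               # stand-in for one unit of real work
            state["step"] += 1
            if state["step"] % 100 == 0:
                checkpoint(state)                # pay the I/O cost only every 100 steps
        print(state)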

  17. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins to existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment (XSEDE) supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.

  18. Optimizing high performance computing workflow for protein functional annotation

    PubMed Central

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-01-01

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins to existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment (XSEDE) supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  19. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Developing a Framework for Seamless Prediction of Sub-Seasonal to Seasonal Extreme Precipitation Events in the United States.

    NASA Astrophysics Data System (ADS)

    Rosendahl, D. H.; Ćwik, P.; Martin, E. R.; Basara, J. B.; Brooks, H. E.; Furtado, J. C.; Homeyer, C. R.; Lazrus, H.; Mcpherson, R. A.; Mullens, E.; Richman, M. B.; Robinson-Cook, A.

    2017-12-01

    Extreme precipitation events cause significant damage to homes, businesses, infrastructure, and agriculture, as well as many injuries and fatalities as a result of fast-moving water or waterborne diseases. In the USA, these natural hazard events claimed the lives of more than 300 people during 2015 - 2016 alone, with total damage reaching $24.4 billion. Prior studies of extreme precipitation events have focused on the sub-daily to sub-weekly timeframes. However, many decisions for planning, preparing and resilience-building require sub-seasonal to seasonal timeframes (S2S; 14 to 90 days), but adequate forecasting tools for prediction do not exist. Therefore, the goal of this newly funded project is an enhancement in understanding of the large-scale forcing and dynamics of S2S extreme precipitation events in the United States, and improved capability for modeling and predicting such events. Here, we describe the project goals, objectives, and research activities that will take place over the next 5 years. In this project, a unique team of scientists and stakeholders will identify and understand weather and climate processes connected with the prediction of S2S extreme precipitation events by answering these research questions: 1) What are the synoptic patterns associated with, and characteristic of, S2S extreme precipitation events in the contiguous U.S.? 2) What role, if any, do large-scale modes of climate variability play in modulating these events? 3) How predictable are S2S extreme precipitation events across temporal scales? 4) How do we create an informative prediction of S2S extreme precipitation events for policymaking and planning? This project will use observational data, high-resolution radar composites, dynamical climate models and workshops that engage stakeholders (water resource managers, emergency managers and tribal environmental professionals) in co-production of knowledge. The overarching result of this project will be predictive models to reduce the societal and economic impacts of extreme precipitation events. Another outcome will include statistical and co-production frameworks, which could be applied across other meteorological extremes, all time scales and in other parts of the world to increase resilience to extreme meteorological events.

  1. Development of a Computer-Adaptive Physical Function Instrument for Social Security Administration Disability Determination

    PubMed Central

    Ni, Pengsheng; McDonough, Christine M.; Jette, Alan M.; Bogusz, Kara; Marfeo, Elizabeth E.; Rasch, Elizabeth K.; Brandt, Diane E.; Meterko, Mark; Chan, Leighton

    2014-01-01

    Objectives To develop and test an instrument to assess physical function (PF) for Social Security Administration (SSA) disability programs, the SSA-PF. Item Response Theory (IRT) analyses were used to 1) create a calibrated item bank for each of the factors identified in prior factor analyses, 2) assess the fit of the items within each scale, 3) develop separate Computer-Adaptive Test (CAT) instruments for each scale, and 4) conduct initial psychometric testing. Design Cross-sectional data collection; IRT analyses; CAT simulation. Setting Telephone and internet survey. Participants Two samples: 1,017 SSA claimants, and 999 adults from the US general population. Interventions None. Main Outcome Measure Model fit statistics, correlation and reliability coefficients. Results IRT analyses resulted in five unidimensional SSA-PF scales: Changing & Maintaining Body Position, Whole Body Mobility, Upper Body Function, Upper Extremity Fine Motor, and Wheelchair Mobility for a total of 102 items. High CAT accuracy was demonstrated by strong correlations between simulated CAT scores and those from the full item banks. Comparing the simulated CATs to the full item banks, very little loss of reliability or precision was noted, except at the lower and upper ranges of each scale. No difference in response patterns by age or sex was noted. The distributions of claimant scores were shifted to the lower end of each scale compared to those of a sample of US adults. Conclusions The SSA-PF instrument contributes important new methodology for measuring the physical function of adults applying to the SSA disability programs. Initial evaluation revealed that the SSA-PF instrument achieved considerable breadth of coverage in each content domain and demonstrated noteworthy psychometric properties. PMID:23578594
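
    The core step of a computer-adaptive test of this kind is to administer, at each turn, the item with the highest Fisher information at the current ability estimate. The sketch below uses a two-parameter logistic model with made-up item parameters; the SSA-PF item bank is not reproduced here.

      import math

      def p_correct(theta, a, b):
          """2PL item response function: probability of endorsing the item."""
          return 1.0 / (1.0 + math.exp(-a * (theta - b)))

      def fisher_information(theta, a, b):
          """Item information at ability theta for a 2PL item."""
          p = p_correct(theta, a, b)
          return a * a * p * (1.0 - p)

      def next_item(theta, bank, administered):
          """Pick the most informative item not yet administered."""
          candidates = [i for i in bank if i not in administered]
          return max(candidates, key=lambda i: fisher_information(theta, *bank[i]))

      if __name__ == "__main__":
          # hypothetical bank: item -> (discrimination a, difficulty b)
          bank = {"i1": (1.2, -1.0), "i2": (0.8, 0.0), "i3": (1.5, 0.5)}
          theta = 0.3                      # current ability estimate
          print(next_item(theta, bank, administered={"i2"}))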

  2. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    NASA Astrophysics Data System (ADS)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
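
    The Nash-Sutcliffe efficiency used to quantify the degradation between resolutions is NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2). A minimal generic implementation with illustrative discharge values (not CaMa-Flood code) follows.

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2).
          1 is a perfect fit; values <= 0 mean the model is no better
          than predicting the observed mean."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          residual = np.sum((simulated - observed) ** 2)
          variance = np.sum((observed - observed.mean()) ** 2)
          return 1.0 - residual / variance

      if __name__ == "__main__":
          obs = [120.0, 340.0, 560.0, 410.0, 220.0]   # e.g. discharge in m3/s
          sim = [100.0, 360.0, 530.0, 450.0, 200.0]
          print(round(nash_sutcliffe(obs, sim), 3))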

  3. Development of a computer-adaptive physical function instrument for Social Security Administration disability determination.

    PubMed

    Ni, Pengsheng; McDonough, Christine M; Jette, Alan M; Bogusz, Kara; Marfeo, Elizabeth E; Rasch, Elizabeth K; Brandt, Diane E; Meterko, Mark; Haley, Stephen M; Chan, Leighton

    2013-09-01

    To develop and test an instrument to assess physical function for Social Security Administration (SSA) disability programs, the SSA-Physical Function (SSA-PF) instrument. Item response theory (IRT) analyses were used to (1) create a calibrated item bank for each of the factors identified in prior factor analyses, (2) assess the fit of the items within each scale, (3) develop separate computer-adaptive testing (CAT) instruments for each scale, and (4) conduct initial psychometric testing. Cross-sectional data collection; IRT analyses; CAT simulation. Telephone and Internet survey. Two samples: SSA claimants (n=1017) and adults from the U.S. general population (n=999). None. Model fit statistics, correlation, and reliability coefficients. IRT analyses resulted in 5 unidimensional SSA-PF scales: Changing & Maintaining Body Position, Whole Body Mobility, Upper Body Function, Upper Extremity Fine Motor, and Wheelchair Mobility for a total of 102 items. High CAT accuracy was demonstrated by strong correlations between simulated CAT scores and those from the full item banks. On comparing the simulated CATs with the full item banks, very little loss of reliability or precision was noted, except at the lower and upper ranges of each scale. No difference in response patterns by age or sex was noted. The distributions of claimant scores were shifted to the lower end of each scale compared with those of a sample of U.S. adults. The SSA-PF instrument contributes important new methodology for measuring the physical function of adults applying to the SSA disability programs. Initial evaluation revealed that the SSA-PF instrument achieved considerable breadth of coverage in each content domain and demonstrated noteworthy psychometric properties. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Evaluation and projected changes of precipitation statistics in convection-permitting WRF climate simulations over Central Europe

    NASA Astrophysics Data System (ADS)

    Knist, Sebastian; Goergen, Klaus; Simmer, Clemens

    2018-02-01

    We perform simulations with the WRF regional climate model at 12 and 3 km grid resolution for the current and future climates over Central Europe and evaluate their added value with a focus on the daily cycle and frequency distribution of rainfall and the relation between extreme precipitation and air temperature. First, a 9 year period of ERA-Interim driven simulations is evaluated against observations; then global climate model runs (MPI-ESM-LR RCP4.5 scenario) are downscaled and analyzed for three 12-year periods: a control, a mid-of-century and an end-of-century projection. The higher resolution simulations reproduce both the diurnal cycle and the hourly intensity distribution of precipitation more realistically compared to the 12 km simulation. Moreover, the observed increase of the temperature-extreme precipitation scaling from the Clausius-Clapeyron (C-C) scaling rate of 7% K⁻¹ to a super-adiabatic scaling rate for temperatures above 11 °C is reproduced only by the 3 km simulation. The drop of the scaling rates at high temperatures under moisture-limited conditions differs between sub-regions. For both future scenario time spans both simulations suggest a slight decrease in mean summer precipitation and an increase in hourly heavy and extreme precipitation. This increase is stronger in the 3 km runs. Temperature-extreme precipitation scaling curves in the future climate are projected to shift along the 7% K⁻¹ trajectory to higher peak extreme precipitation values at higher temperatures. The curves keep their typical shape of C-C scaling followed by super-adiabatic scaling and a drop-off at higher temperatures due to moisture limitation.
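
    The temperature scaling of extreme precipitation referred to here is commonly diagnosed by binning wet-hour intensities by temperature and tracking a high percentile per bin; the scaling rate can then be compared with the Clausius-Clapeyron rate of roughly 7% per kelvin. The sketch below is a generic version of that diagnostic with synthetic data, not the WRF analysis; thresholds and bin sizes are illustrative choices.

      import numpy as np

      def scaling_curve(temp, precip, bin_width=2.0, pct=99.0):
          """Return bin-centre temperatures and the given precipitation
          percentile in each temperature bin (wet hours only)."""
          temp = np.asarray(temp, dtype=float)
          precip = np.asarray(precip, dtype=float)
          edges = np.arange(temp.min(), temp.max() + bin_width, bin_width)
          centres, extremes = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (temp >= lo) & (temp < hi) & (precip > 0.1)  # wet hours
              if sel.sum() >= 50:                                # enough samples
                  centres.append(0.5 * (lo + hi))
                  extremes.append(np.percentile(precip[sel], pct))
          return np.array(centres), np.array(extremes)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          t = rng.uniform(0, 25, 20000)
          # synthetic wet-hour intensities that grow ~7% per kelvin
          p = rng.exponential(1.0, t.size) * 1.07 ** t
          tc, px = scaling_curve(t, p)
          rates = 100 * (np.diff(np.log(px)) / np.diff(tc))      # % per K
          print(rates.round(1))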

  5. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  6. Predictors of abdominal injuries in blunt trauma.

    PubMed

    Farrath, Samiris; Parreira, José Gustavo; Perlingeiro, Jacqueline A G; Solda, Silvia C; Assef, José Cesar

    2012-01-01

    To identify predictors of abdominal injuries in victims of blunt trauma. Retrospective analysis of trauma protocols (collected prospectively) of adult victims of blunt trauma in a period of 15 months. Variables were compared between patients with abdominal injuries (AIS>0) detected by computed tomography and/or laparotomy (group I) and others (AIS=0, group II). Student's t, Fisher and chi-square tests were used for statistical analysis, considering p<0.05 as significant. A total of 3783 cases were included, with a mean age of 39.1 ± 17.7 years (14-99), 76.1% being male. Abdominal injuries were detected in 130 patients (3.4%). Patients sustaining abdominal injuries had significantly lower mean age (35.4 ± 15.4 vs. 39.2 ± 17.7), lower mean systolic blood pressure on admission (114.7 ± 32.4 mmHg vs. 129.1 ± 21.7 mmHg), lower mean Glasgow coma scale (12.9 ± 3.9 vs. 14.3 ± 2.0), as well as higher head AIS (0.95 ± 1.5 vs. 0.67 ± 1.1), higher thorax AIS (1.10 ± 1.5 vs. 0.11 ± 0.6) and higher extremities AIS (1.70 ± 1.8 vs. 1.03 ± 1.2). Patients sustaining abdominal injuries also presented a higher frequency of severe injuries (AIS>3) in the head (18.5% vs. 7.9%), thorax (29.2% vs. 2.4%) and extremities (40.0% vs. 13.7%). The highest odds ratios for the diagnosis of abdominal injuries were associated with flail chest (21.8) and pelvic fractures (21.0). Abdominal injuries were more frequently observed in patients with hemodynamic instability, changes in Glasgow coma scale and severe lesions to the head, chest and extremities.
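
    The reported odds ratios follow from standard 2x2 contingency tables of predictor presence versus abdominal injury. A minimal sketch of that calculation with purely illustrative counts (not the study data) is shown below.

      def odds_ratio(exposed_cases, exposed_controls,
                     unexposed_cases, unexposed_controls):
          """OR = (a/b) / (c/d) for a 2x2 table of exposure vs. outcome."""
          return ((exposed_cases / exposed_controls)
                  / (unexposed_cases / unexposed_controls))

      if __name__ == "__main__":
          # hypothetical counts: abdominal injury yes/no by pelvic fracture yes/no
          print(round(odds_ratio(30, 50, 100, 3603), 1))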

  7. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of the fields. Imaging methods are all based on multiscale methods allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of the matrices needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not requiring the large data extension or tapering that Fourier-domain computation demands. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
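
    As a simplified one-dimensional illustration of the space-domain idea (discretize the upward-continuation integral and exploit the fact that the resulting operator is a symmetric Toeplitz matrix, so only its first row is needed), consider the sketch below. It uses the 2D (profile) upward-continuation kernel and ignores edge truncation; it is a didactic approximation, not the authors' 3D implementation.

      import numpy as np

      def upward_continue_profile(field, dx, height):
          """Upward-continue profile data by `height` with the space-domain
          kernel k(x) = (h / pi) / (x**2 + h**2) * dx. The kernel depends
          only on |x - x'|, so the full operator is symmetric Toeplitz and
          can be built from its first row alone."""
          field = np.asarray(field, dtype=float)
          n = field.size
          offsets = np.arange(n) * dx
          first_row = (height / np.pi) * dx / (offsets ** 2 + height ** 2)
          # expand the first row into the full symmetric Toeplitz operator
          idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
          operator = first_row[idx]
          return operator @ field

      if __name__ == "__main__":
          x = np.linspace(-50, 50, 201)
          dx = x[1] - x[0]
          anomaly = 1.0 / (x ** 2 + 4.0)        # synthetic anomaly profile
          print(upward_continue_profile(anomaly, dx, height=3.0)[:5].round(4))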

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), the aqua-planet configuration of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and on very recently developed measures, the finite-amplitude wave activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, Jill; Corones, James; Batchelor, Donald

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality, which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas, and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross-coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world's energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.

  10. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

    Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
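
    To make the exponential growth concrete, here is a back-of-the-envelope sketch assuming a lattice of spin-1/2 sites with double-precision complex amplitudes (an illustrative assumption; the paper's lattice models are more general). Fifty sites already require petabytes for a single state vector.

      def state_vector_bytes(num_sites, bytes_per_amplitude=16):
          """Memory for one many-body wavefunction of spin-1/2 sites:
          2**num_sites complex double-precision amplitudes."""
          return (2 ** num_sites) * bytes_per_amplitude

      if __name__ == "__main__":
          for n in (30, 40, 50):
              print(n, "sites:", state_vector_bytes(n) / 1e12, "TB")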

  11. Synoptic and meteorological drivers of extreme ozone concentrations over Europe

    NASA Astrophysics Data System (ADS)

    Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim

    2016-04-01

    The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months, during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where the maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during warmer months.
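
    The three regression techniques can be sketched with standard statistical tooling. The example below uses statsmodels with synthetic data; the predictors (daily maximum temperature and wind speed) and the 90 ppb-style threshold are illustrative stand-ins, not the study's full covariate set or thresholds.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 500
      tmax = rng.normal(25, 5, n)                   # daily maximum temperature
      wind = rng.normal(3, 1, n)                    # wind speed
      ozone = 20 + 2.5 * tmax - 4.0 * wind + rng.normal(0, 10, n)

      X = sm.add_constant(np.column_stack([tmax, wind]))

      # 1) multiple linear regression for mean daily-maximum ozone
      ols = sm.OLS(ozone, X).fit()

      # 2) logistic regression for the probability of exceeding a threshold
      exceed = (ozone > 90).astype(int)
      logit = sm.Logit(exceed, X).fit(disp=0)

      # 3) quantile regression for the 95th percentile (extreme ozone)
      qreg = sm.QuantReg(ozone, X).fit(q=0.95)

      print(ols.params.round(2), logit.params.round(2),
            qreg.params.round(2), sep="\n")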

  12. Censored rainfall modelling for estimation of fine-scale extremes

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  13. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.

  14. High-resolution RCMs as pioneers for future GCMs

    NASA Astrophysics Data System (ADS)

    Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.

    2017-12-01

    Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data sets, the assessment of regional-scale climate feedback processes, and the development of alternative output analysis methodologies.

  15. CFD Study of Full-Scale Aerobic Bioreactors: Evaluation of Dynamic O2 Distribution, Gas-Liquid Mass Transfer and Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbird, David; Sitaraman, Hariswaran; Stickel, Jonathan

    If advanced biofuels are to measurably displace fossil fuels in the near term, they will have to operate at levels of scale, efficiency, and margin unprecedented in the current biotech industry. For aerobically-grown products in particular, scale-up is complex and the practical size, cost, and operability of extremely large reactors is not well understood. Put simply, the problem of how to attain fuel-class production scales comes down to cost-effective delivery of oxygen at high mass transfer rates and low capital and operating costs. To that end, very large reactor vessels (>500 m³) are proposed in order to achieve favorable economies of scale. Additionally, techno-economic evaluation indicates that bubble-column reactors are more cost-effective than stirred-tank reactors in many low-viscosity cultures. In order to advance the design of extremely large aerobic bioreactors, we have performed computational fluid dynamics (CFD) simulations of bubble-column reactors. A multiphase Euler-Euler model is used to explicitly account for the spatial distribution of air (i.e., gas bubbles) in the reactor. Expanding on the existing bioreactor CFD literature (typically focused on the hydrodynamics of bubbly flows), our simulations include interphase mass transfer of oxygen and a simple phenomenological reaction representing the uptake and consumption of dissolved oxygen by submerged cells. The simulations reproduce the expected flow profiles, with net upward flow in the center of the column and downward flow near the wall. At high simulated oxygen uptake rates (OUR), oxygen-depleted regions can be observed in the reactor. By increasing the gas flow to enhance mixing and eliminate depleted areas, a maximum oxygen transfer rate (OTR) is obtained as a function of superficial velocity. These insights regarding minimum superficial velocity and maximum reactor size are incorporated into NREL's larger techno-economic models to supplement standard reactor design equations.

  16. Dynamical systems proxies of atmospheric predictability and mid-latitude extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Faranda, Davide; Caballero, Rodrigo; Yiou, Pascal

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. Many extremes (e.g., storms, heatwaves, cold spells, heavy precipitation) are tied to specific patterns of midlatitude atmospheric circulation. The ability to identify these patterns and use them to enhance the predictability of the extremes is therefore a topic of crucial societal and economic value. We propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We use two simple dynamical systems metrics - local dimension and persistence - to identify sets of similar large-scale atmospheric flow patterns which present a coherent temporal evolution. When these patterns correspond to weather extremes, they therefore afford a particularly good forward predictability. We specifically test this technique on European winter temperatures, whose variability largely depends on the atmospheric circulation in the North Atlantic region. We find that our dynamical systems approach provides predictability of large-scale temperature extremes up to one week in advance.
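
    The local-dimension metric named here has a compact estimator in the extreme-value framework this line of work builds on. The sketch below is a simplified, generic implementation with synthetic states, not the authors' code: exceedances of -log(distance) above a high quantile are approximately exponential, and the reciprocal of their mean estimates the local dimension at that state.

      import numpy as np

      def local_dimension(trajectory, index, quantile=0.98):
          """Estimate the local dimension at trajectory[index].
          Distances to all other states are converted to g = -log(dist);
          exceedances of g over a high quantile are (approximately)
          exponentially distributed, and d = 1 / mean(exceedance)."""
          point = trajectory[index]
          dist = np.linalg.norm(trajectory - point, axis=1)
          dist = np.delete(dist, index)              # exclude the point itself
          g = -np.log(dist)
          threshold = np.quantile(g, quantile)
          exceedances = g[g > threshold] - threshold
          return 1.0 / exceedances.mean()

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          # synthetic 3-D "circulation states"; the true dimension is 3
          states = rng.normal(size=(5000, 3))
          print(round(local_dimension(states, index=0), 2))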

  17. Microstructural Influence on Deformation and Fatigue Life of Composites Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Murthy, P.; Bednarcyk, B. A.; Pineda, E. J.

    2015-01-01

    A fully coupled deformation and damage approach to modeling the response of composite materials and composite laminates is presented. It is based on the semi-analytical generalized method of cells (GMC) micromechanics model as well as its higher fidelity counterpart, HFGMC, both of which provide closed-form constitutive equations for composite materials as well as the microscale stress and strain fields in the composite phases. The provided constitutive equations allow GMC and HFGMC to function within a higher scale structural analysis (e.g., finite element analysis or lamination theory) to represent a composite material point, while the availability of the micro fields allows the incorporation of lower scale sub-models to represent local phenomena in the fiber and matrix. Further, GMC's formulation performs averaging when applying certain governing equations such that some degree of microscale field accuracy is surrendered in favor of extreme computational efficiency, rendering the method quite attractive as the centerpiece of an integrated computational materials engineering (ICME) structural analysis; whereas HFGMC retains this microscale field accuracy, but at the price of significantly slower computational speed. Herein, the sensitivity of deformation and the fatigue life of graphite/epoxy PMC composites, with both ordered and disordered microstructures, has been investigated using this coupled deformation and damage micromechanics based approach. The local effects of fiber breakage and fatigue damage are included as sub-models that operate on the microscale for the individual composite phases. For analysis of laminates, classical lamination theory is employed as the global or structural scale model, while GMC/HFGMC is embedded to operate on the microscale to simulate the behavior of the composite material within each laminate layer. A key outcome of this study is the statistical influence of microstructure and micromechanics idealization (GMC or HFGMC) on the overall accuracy of unidirectional and laminated composite deformation and fatigue response.

  18. Spiking Excitable Semiconductor Laser as Optical Neurons: Dynamics, Clustering and Global Emerging Behaviors

    DTIC Science & Technology

    2014-06-28

    constructed from inexpensive semiconductor lasers could lead to the development of novel neuro-inspired optical computing devices (threshold detectors, logic gates, signal recognition, etc.). Other topics of research included the analysis of extreme events in … Extreme events are nowadays a highly active field of research: rogue waves, earthquakes of high magnitude, and financial crises are all rare and …

  19. Computer Anxiety: How to Measure It?

    ERIC Educational Resources Information Center

    McPherson, Bill

    1997-01-01

    Provides an overview of five scales that are used to measure computer anxiety: Computer Anxiety Index, Computer Anxiety Scale, Computer Attitude Scale, Attitudes toward Computers, and Blombert-Erickson-Lowrey Computer Attitude Task. Includes background information and scale specifics. (JOW)

  20. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph

    2016-09-01

    The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Using horizontal grid spacings of O(1km), convection-resolving weather and climate models allow one to explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model. Here we present the convection-resolving COSMO model on continental scales using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season 2006. Its results corroborate findings from smaller domains, such as a more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intense hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold frontal structures, convection embedded in fronts and small eddies, or the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware. With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.

  1. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  2. Portable upper extremity robotics is as efficacious as upper extremity rehabilitative therapy: a randomized controlled pilot trial.

    PubMed

    Page, Stephen J; Hill, Valerie; White, Susan

    2013-06-01

    To compare the efficacy of a repetitive task-specific practice regimen integrating a portable, electromyography-controlled brace called the 'Myomo' versus usual care repetitive task-specific practice in subjects with chronic, moderate upper extremity impairment. Sixteen subjects (7 males; mean age 57.0 ± 11.02 years; mean time post stroke 75.0 ± 87.63 months; 5 left-sided strokes) exhibiting chronic, stable, moderate upper extremity impairment. Subjects were administered repetitive task-specific practice in which they participated in valued, functional tasks using their paretic upper extremities. Both groups were supervised by a therapist and were administered therapy targeting their paretic upper extremities that was 30 minutes in duration, occurring 3 days/week for eight weeks. One group participated in repetitive task-specific practice entirely while wearing the portable robotic, while the other performed the same activity regimen manually. The upper extremity Fugl-Meyer, Canadian Occupational Performance Measure and Stroke Impact Scale were administered on two occasions before intervention and once after intervention. After intervention, groups exhibited nearly identical Fugl-Meyer score increases of ≈2.1 points; the group using robotics exhibited larger score changes on all but one of the Canadian Occupational Performance Measure and Stroke Impact Scale subscales, including a 12.5-point increase on the Stroke Impact Scale recovery subscale. Findings suggest that therapist-supervised repetitive task-specific practice integrating robotics is as efficacious as manual practice in subjects with moderate upper extremity impairment.

  3. Towards a Unified Framework in Hydroclimate Extremes Prediction in Changing Climate

    NASA Astrophysics Data System (ADS)

    Moradkhani, H.; Yan, H.; Zarekarizi, M.; Bracken, C.

    2016-12-01

    Spatio-temporal analysis and prediction of hydroclimate extremes are of paramount importance in disaster mitigation and emergency management. The IPCC special report on managing the risks of extreme events and disasters emphasizes that global warming would change the frequency, severity, and spatial pattern of extremes. In addition to climate change, land use and land cover changes also influence extreme characteristics at the regional scale. Therefore, natural variability and anthropogenic changes to the hydroclimate system result in nonstationarity in hydroclimate variables. In this presentation, recent advancements in developing and using Bayesian approaches to account for non-stationarity in hydroclimate extremes are discussed. Implications of these approaches for flood frequency analysis, the treatment of spatial dependence, the impact of large-scale climate variability, the selection of cause-effect covariates, and the quantification of model errors in extreme prediction are also explained. Within this framework, the applicability and usefulness of ensemble data assimilation for extreme flood prediction is also introduced. Finally, a practical and easy-to-use approach for better communication with decision-makers and emergency managers is presented.
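
    As one concrete illustration of accounting for non-stationarity in an extreme-value setting (a generic maximum-likelihood sketch with synthetic annual maxima, not the presenters' Bayesian framework), the GEV location parameter can be made a linear function of a covariate such as time; the parameter names and values below are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import genextreme

      def neg_log_lik(params, maxima, covariate):
          """GEV with time-varying location mu(t) = mu0 + mu1 * covariate."""
          mu0, mu1, log_sigma, shape = params
          mu = mu0 + mu1 * covariate
          sigma = np.exp(log_sigma)
          # scipy's genextreme uses c = -shape relative to the usual xi convention
          return -np.sum(genextreme.logpdf(maxima, c=-shape, loc=mu, scale=sigma))

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          years = np.arange(60)
          # synthetic annual maxima with an upward trend in the location
          maxima = genextreme.rvs(c=-0.1, loc=50 + 0.3 * years, scale=8,
                                  size=years.size, random_state=rng)
          fit = minimize(neg_log_lik, x0=[50.0, 0.0, np.log(8.0), 0.1],
                         args=(maxima, years), method="Nelder-Mead")
          print(fit.x.round(3))      # mu0, trend, log(sigma), shape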

  4. Measuring water fluxes in forests: The need for integrative platforms of analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Eric J.

    To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers and the minutiae of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that match this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.

  5. Measuring water fluxes in forests: The need for integrative platforms of analysis

    DOE PAGES

    Ward, Eric J.

    2016-08-09

    To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers and the minutiae of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that match this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.

  6. Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods referred to as the ACM and wavelet filter schemes are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical and experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that even though the unsteadiness of the flows are rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.

  7. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  8. Ultra-Parameterized CAM: Progress Towards Low-Cloud Permitting Superparameterization

    NASA Astrophysics Data System (ADS)

    Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Khairoutdinov, M.; Wyant, M. C.; Singh, B.

    2016-12-01

    A leading source of uncertainty in climate feedback arises from the representation of low clouds, which are not resolved but depend on small-scale physical processes (e.g. entrainment, boundary layer turbulence) that are heavily parameterized. We show results from recent attempts to achieve an explicit representation of low clouds by pushing the computational limits of cloud superparameterization to resolve boundary-layer eddy scales relevant to marine stratocumulus (250m horizontal and 20m vertical length scales). This extreme configuration is called "ultraparameterization". Effects of varying horizontal vs. vertical resolution are analyzed in the context of altered constraints on the turbulent kinetic energy statistics of the marine boundary layer. We show that 250m embedded horizontal resolution leads to a more realistic boundary layer vertical structure, but also to an unrealistic cloud pulsation that cannibalizes time mean LWP. We explore the hypothesis that feedbacks involving horizontal advection (not typically encountered in offline LES that neglect this degree of freedom) may conspire to produce such effects and present strategies to compensate. The results are relevant to understanding the emergent behavior of quasi-resolved low cloud decks in a multi-scale modeling framework within a previously unencountered grey zone of better resolved boundary-layer turbulence.

  9. RIKEN BNL Research Center

    NASA Astrophysics Data System (ADS)

    Samios, Nicholas

    2014-09-01

    Since its inception in 1997, the RIKEN BNL Research Center (RBRC) has been a major force in the realms of Spin Physics, Relativistic Heavy Ion Physics, large scale Computing Physics and the training of a new generation of extremely talented physicists. This has been accomplished through the recruitment of an outstanding non-permanent staff of Fellows and Research associates in theory and experiment. RBRC is now a mature organization that has reached a steady level in the size of scientific and support staff while at the same time retaining its vibrant youth. A brief history of the scientific accomplishments and contributions of the RBRC physicists will be presented as well as a discussion of the unique RBRC management structure.

  10. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST was to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  11. The NASA Energy and Water Cycle Extreme (NEWSE) Integration Project

    NASA Technical Reports Server (NTRS)

    House, P. R.; Lapenta, W.; Schiffer, R.

    2008-01-01

    Skillful predictions of water and energy cycle extremes (flood and drought) are elusive. To better understand the mechanisms responsible for water and energy extremes, and to make decisive progress in predicting these extremes, the collaborative NASA Energy and Water cycle Extremes (NEWSE) Integration Project is studying these extremes in the U.S. Southern Great Plains (SGP) during 2006-2007, including their relationships with continental and global scale processes, and assessment of their predictability on multiple space and time scales. It is our hypothesis that an integrative analysis of observed extremes which reflects the current understanding of how SST and soil moisture variability influence atmospheric heating and the forcing of planetary waves, incorporating recently available global and regional hydro-meteorological datasets (i.e., precipitation, water vapor, clouds, etc.) in conjunction with advances in data assimilation, can lead to new insights into the factors that lead to persistent drought and flooding. We will show initial results of this project, whose goals are to provide: an improved definition, attribution and prediction on sub-seasonal to interannual time scales; improved understanding of the mechanisms of decadal drought and its predictability, including the impacts of SST variability and deep soil moisture variability; improved monitoring/attribution with transition to applications; and a bridging of the gap between hydrological forecasts and stakeholders (utilization of probabilistic forecasts, education, forecast interpretation for different sectors, assessment of uncertainties for different sectors, etc.).

  12. Trends in mean and extreme temperatures over Ibadan, Southwest Nigeria

    NASA Astrophysics Data System (ADS)

    Abatan, Abayomi A.; Osayomi, Tolulope; Akande, Samuel O.; Abiodun, Babatunde J.; Gutowski, William J.

    2018-02-01

    In recent times, Ibadan has been experiencing an increase in mean temperature which appears to be linked to anthropogenic global warming. Previous studies have indicated that the warming may be accompanied by changes in extreme events. This study examined trends in mean and extreme temperatures over Ibadan during 1971-2012 at annual and seasonal scales using the high-resolution atmospheric reanalysis from European Centre for Medium-Range Weather Forecasts (ECMWF) twentieth-century dataset (ERA-20C) at 15 grid points. Magnitudes of linear trends in mean and extreme temperatures and their statistical significance were calculated using ordinary least squares and Mann-Kendall rank statistic tests. The results show that Ibadan has witnessed an increase in annual and seasonal mean minimum temperatures. The annual mean maximum temperature exhibited a non-significant decline in most parts of Ibadan. While trends in cold extremes at annual scale show warming, trends in coldest night show greater warming than in coldest day. At the seasonal scale, we found that Ibadan experienced a mix of positive and negative trends in absolute extreme temperature indices. However, cold extremes show the largest trend magnitudes, with trends in coldest night showing the greatest warming. The results compare well with those obtained from a limited number of stations. This study should inform decision-makers and urban planners about the ongoing warming in Ibadan.
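
    The Mann-Kendall rank statistic used for the trend significance tests can be written out in a few lines. The sketch below is a generic, tie-free implementation applied to synthetic data, not the ERA-20C analysis.

      import math

      def mann_kendall_z(series):
          """Mann-Kendall trend test (no correction for ties).
          S counts concordant minus discordant pairs; under the null,
          Z ~ N(0, 1), so |Z| > 1.96 indicates a trend at the 5% level."""
          x = list(series)
          n = len(x)
          s = sum((x[j] > x[i]) - (x[j] < x[i])
                  for i in range(n - 1) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0
          if s > 0:
              return (s - 1) / math.sqrt(var_s)
          if s < 0:
              return (s + 1) / math.sqrt(var_s)
          return 0.0

      if __name__ == "__main__":
          # synthetic warming series: weak trend plus year-to-year noise
          warming_nights = [20.1 + 0.03 * t + 0.2 * ((-1) ** t) for t in range(42)]
          print(round(mann_kendall_z(warming_nights), 2))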

  13. Ready…, Set, Go!. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael A. Arbib

    NASA Astrophysics Data System (ADS)

    Iriki, Atsushi

    2016-03-01

    "Language-READY brain" in the title of this article [1] seems to be the expression that the author prefers to use to illustrate his theoretical framework. The usage of the term "READY" appears to be of extremely deep connotation, for three reasons. Firstly, of course it needs a "principle": the depth and the width of the computational theory depicted here are as expected from the author's reputation. However, "readiness" implies that it is much more than just "a theory". That is, such a principle is not static; rather, it has dynamic properties, which are ready to gradually proceed to flourish once brains are put in adequate conditions to make time progressions, namely evolution and development. So the second major connotation is that this article brings in the perspectives of comparative primatology as a tool to relativise the language-realizing human brain among other animal species, primates in particular, in the context of the evolutionary time scale. The third connotation lies in the context of the developmental time scale. The author claims that it is the interaction of the newborn with its caretakers, namely its mother and other family or social members in its ecological conditions, that brings the brain mechanism subserving the language faculty to really mature to its final completion. Taken together, this article proposes computational theories and mechanisms of Evo-Devo-Eco interactions for language acquisition in the human brain.

  14. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) occupy a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time-series-analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better-established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized to non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of Big Data and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results.
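    As a concrete illustration of the iterated map behind 'Chaos Game Representation', the sketch below maps a DNA string into the unit square by repeatedly moving halfway toward the corner assigned to each nucleotide; this is a minimal textbook CGR, not any of the specific implementations reviewed above.

```python
# Minimal Chaos Game Representation (CGR): each base pulls the current point
# halfway towards its corner of the unit square, yielding a scale-free map.
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(sequence):
    """Return the (x, y) trajectory of the CGR iterated map for a DNA string."""
    points = np.empty((len(sequence), 2))
    x, y = 0.5, 0.5                               # conventional starting point
    for i, base in enumerate(sequence.upper()):
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0     # midpoint (contraction) map
        points[i] = (x, y)
    return points

print(cgr("ACGTGGTCAACGTATTGC")[:3])              # first three mapped points
```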

  15. Pore-level numerical analysis of the infrared surface temperature of metallic foam

    NASA Astrophysics Data System (ADS)

    Li, Yang; Xia, Xin-Lin; Sun, Chuang; Tan, He-Ping; Wang, Jing

    2017-10-01

    Open-cell metallic foams are increasingly used in various thermal systems, and their temperature distributions are significant for a comprehensive understanding of these foam-based engineering applications. This study aims to numerically model the infrared surface temperature (IRST) of open-cell metallic foam measured by an infrared camera placed above the sample. Two approaches based on backward Monte Carlo simulation are developed to estimate the IRSTs: the first, a discrete-scale approach (DSA), uses a realistic discrete representation of the foam structure obtained from a computed tomography reconstruction, while the second, a continuous-scale approach (CSA), assumes that the foam sample behaves like a continuous homogeneous semi-transparent medium. The radiative properties employed in the CSA are determined directly by a ray-tracing process inside the discrete foam representation. The IRSTs for different material properties (material emissivity, specularity parameter) are computed by the two approaches. The results show that local IRSTs vary according to the local composition of the foam surface (void and solid). The temperature difference between void and solid areas is gradually attenuated with increasing material emissivity. In addition, the annular void space near the foam surface, enclosed by numerous neighboring skeleton struts, behaves like a black cavity for thermal radiation. For most of the cases studied, the mean IRSTs computed by the DSA and CSA are close to each other, except when the material emissivity is very low and the sample temperature is extremely high.

  16. Weather extremes in very large, high-resolution ensembles: the weatherathome experiment

    NASA Astrophysics Data System (ADS)

    Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.

    2011-12-01

    Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or on statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run at 25 and 50 km resolution on personal computers volunteered by the general public, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments focus on the event attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al. (2011) but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.

  17. Robust increase in extreme summer rainfall intensity during the past four decades observed in China

    NASA Astrophysics Data System (ADS)

    Xiao, Chan; Wu, Peili; Zhang, Lixia; Song, Lianchun

    2016-12-01

    Global warming increases the moisture-holding capacity of the atmosphere and consequently the potential risks of extreme rainfall. Here we show that maximum hourly summer rainfall intensity has increased by about 11.2% on average, using continuous hourly gauge records for 1971-2013 from 721 weather stations in China. The corresponding event-accumulated precipitation has on average increased by more than 10%, aided by a small positive trend in event duration. Linear regression of the 95th percentile daily precipitation intensity against daily mean surface air temperature shows a negative scaling of -9.6%/K, in contrast to a positive scaling of 10.6%/K for hourly data. This is made up of a positive scaling below the summer mean temperature and a negative scaling above it. Using seasonal means instead of daily means, we find a consistent scaling rate for the region of 6.7-7%/K for both daily and hourly precipitation extremes, about 10% higher than the regional Clausius-Clapeyron scaling of 6.1%/K based on a mean temperature of 24.6 °C. With up to 18% further increase in extreme precipitation under continued global warming towards the IPCC's 1.5 °C target, the risk of flash floods will be exacerbated, compounding the existing inadequacy of urban drainage systems in a rapidly urbanizing China.
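    For reference, the regional Clausius-Clapeyron rate of about 6.1%/K quoted above follows from the standard approximation below, evaluated at the stated mean temperature of 24.6 °C (297.8 K); the latent heat and gas constant values are standard textbook numbers, not taken from the paper.

```latex
\[
\alpha_{\mathrm{CC}} \;=\; \frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T}
  \;\approx\; \frac{L_v}{R_v T^2}
  \;=\; \frac{2.5\times10^{6}\ \mathrm{J\,kg^{-1}}}
             {461\ \mathrm{J\,kg^{-1}\,K^{-1}}\,(297.8\ \mathrm{K})^{2}}
  \;\approx\; 0.061\ \mathrm{K^{-1}} \;\approx\; 6.1\,\%/\mathrm{K}.
\]
```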

  18. Grassland responses to precipitation extremes

    USDA-ARS?s Scientific Manuscript database

    Grassland ecosystems are naturally subjected to periods of prolonged drought and sequences of wet years. Climate change is expected to enhance the magnitude and frequency of extreme events at the intraannual and multiyear scales. Are grassland responses to extreme precipitation simply a response to ...

  19. MaRIE theory, modeling and computation roadmap executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lookman, Turab

    The confluence of the MaRIE (Matter-Radiation Interactions in Extremes) and extreme-scale (exascale) computing timelines offers a unique opportunity to co-design the elements of materials discovery with theory and high-performance computing, itself co-designed by constrained optimization of hardware and software, and with experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled the 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials, and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail, for each of these areas, the current state of the art, the gaps that exist, and the road map to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities, which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure, and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar sciences and highlights the key theoretical and experimental challenges. We also present a theory, modeling, and computation roadmap of the path to and beyond MaRIE in each of the science areas.

  20. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.
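    The billion-way concurrency figure mentioned above is simple arithmetic: with clock rates capped at a few GHz, i.e. on the order of 10^9 operations per second per execution stream, an exaflop machine must keep roughly

```latex
\[
\frac{10^{18}\ \text{operations/s}}{\sim 10^{9}\ \text{operations/s per stream}}
\;\approx\; 10^{9}\ \text{operations in flight concurrently.}
\]
```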

  1. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  2. A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Tian, X.

    2017-12-01

    The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (B) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is widely used to sequentially correct errors from large to small scales. However, introducing the multigrid technique into four-dimensional variational data assimilation is not easy, owing to its strong dependence on the adjoint model, which carries extremely high costs in code development, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, an advanced four-dimensional ensemble-variational method that can be applied without invoking adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, doubling this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated in several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
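    The coarse-to-fine idea described above can be illustrated with a toy, purely conceptual sketch (not the MG-NLS-4DVar code): a 1D field is analysed on a sequence of grids whose point counts roughly double per level, so large-scale errors are removed before small-scale ones.

```python
# Conceptual coarse-to-fine correction on grids with doubling point counts.
# Hypothetical toy example; not the MG-NLS-4DVar implementation.
import numpy as np

def multigrid_correction(obs, x_fine, levels=(9, 17, 33, 65)):
    """Sequentially fit `obs` on coarse-to-fine grids and return the analysis."""
    analysis = np.zeros_like(obs)
    for n in levels:                              # grid points ~double per level
        x_coarse = np.linspace(x_fine[0], x_fine[-1], n)
        residual = obs - analysis
        # represent the residual at this scale (sample at coarse nodes), then
        # interpolate the increment back to the fine grid and accumulate it
        increment = np.interp(x_coarse, x_fine, residual)
        analysis = analysis + np.interp(x_fine, x_coarse, increment)
    return analysis

x = np.linspace(0.0, 1.0, 257)
truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(16 * np.pi * x)
obs = truth + np.random.default_rng(1).normal(0, 0.05, x.size)
print(np.abs(multigrid_correction(obs, x) - truth).mean())
```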

  3. Numerical Simulation of DC Coronal Heating

    NASA Astrophysics Data System (ADS)

    Dahlburg, Russell B.; Einaudi, G.; Taylor, Brian D.; Ugarte-Urra, Ignacio; Warren, Harry; Rappazzo, A. F.; Velli, Marco

    2016-05-01

    Recent research on observational signatures of turbulent heating of a coronal loop will be discussed. The evolution of the loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. HYPERION calculates the full energy cycle involving footpoint convection, magnetic reconnection, nonlinear thermal conduction and optically thin radiation. The footpoints of the loop magnetic field are convected by random photospheric motions. As a consequence the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Dissipation is non-uniformly distributed so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales which, in the solar corona, remain observationally unresolved: the plasma of the simulated loop is multi-thermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. Typical simulated coronal loops are 50,000 km in length and have axial magnetic field intensities ranging from 0.01 to 0.04 Tesla. To connect these simulations to observations, the computed number densities and temperatures are used to synthesize the intensities expected in emission lines typically observed with the Extreme ultraviolet Imaging Spectrometer (EIS) on Hinode. These intensities are then employed to compute differential emission measure distributions, which are found to be very similar to those derived from observations of solar active regions.

  4. Observational Signatures of Coronal Heating

    NASA Astrophysics Data System (ADS)

    Dahlburg, R. B.; Einaudi, G.; Ugarte-Urra, I.; Warren, H. P.; Rappazzo, A. F.; Velli, M.; Taylor, B.

    2016-12-01

    Recent research on observational signatures of turbulent heating of a coronal loop will be discussed. The evolution of the loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. HYPERION calculates the full energy cycle involving footpoint convection, magnetic reconnection, nonlinear thermal conduction and optically thin radiation. The footpoints of the loop magnetic field are convected by random photospheric motions. As a consequence the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales where heating occurs. Dissipation is non-uniformly distributed so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales which, in the solar corona, remain observationally unresolved: the plasma of the simulated loop is multi-thermal, where highly dynamical hotter and cooler plasma strands are scattered throughout the loop at sub-observational scales. Typical simulated coronal loops are 50,000 km in length and have axial magnetic field intensities ranging from 0.01 to 0.04 Tesla. To connect these simulations to observations, the computed number densities and temperatures are used to synthesize the intensities expected in emission lines typically observed with the Extreme ultraviolet Imaging Spectrometer (EIS) on Hinode. These intensities are then employed to compute differential emission measure distributions, which are found to be very similar to those derived from observations of solar active regions.

  5. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit

    PubMed Central

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-01-01

    Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is an open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358

  6. The Robust Relationship Between Extreme Precipitation and Convective Organization in Idealized Numerical Modeling Simulations

    NASA Astrophysics Data System (ADS)

    Bao, Jiawei; Sherwood, Steven C.; Colin, Maxime; Dixit, Vishal

    2017-10-01

    The behavior of tropical extreme precipitation under changes in sea surface temperatures (SSTs) is investigated with the Weather Research and Forecasting Model (WRF) in three sets of idealized simulations: small-domain tropical radiative-convective equilibrium (RCE), quasi-global "aquapatch", and RCE with prescribed mean ascent from the tropical band in the aquapatch. We find that, across the variations introduced including SST, large-scale circulation, domain size, horizontal resolution, and convective parameterization, the change in the degree of convective organization emerges as a robust mechanism affecting extreme precipitation. Higher ratios of change in extreme precipitation to change in mean surface water vapor are associated with increases in the degree of organization, while lower ratios correspond to decreases in the degree of organization. The spread of such changes is much larger in RCE than aquapatch tropics, suggesting that small RCE domains may be unreliable for assessing the temperature-dependence of extreme precipitation or convective organization. When the degree of organization does not change, simulated extreme precipitation scales with surface water vapor. This slightly exceeds Clausius-Clapeyron (CC) scaling, because the near-surface air warms 10-25% faster than the SST in all experiments. Also for simulations analyzed here with convective parameterizations, there is an increasing trend of organization with SST.

  7. Inter-annual Variability of Temperature and Extreme Heat Events during the Nairobi Warm Season

    NASA Astrophysics Data System (ADS)

    Scott, A.; Misiani, H. O.; Zaitchik, B. F.; Ouma, G. O.; Anyah, R. O.; Jordan, A.

    2016-12-01

    Extreme heat events significantly stress all organisms in the ecosystem and are likely to be amplified in peri-urban and urban areas. Understanding the variability and drivers behind these events is key to generating early warnings, yet in Equatorial East Africa this information is currently unavailable. This study uses daily maximum and minimum temperature records from weather stations within Nairobi and its surroundings to characterize variability in daily minimum temperatures and the number of extreme heat events. ERA-Interim reanalysis is used to assess the drivers of these events at event and seasonal time scales. At seasonal time scales, high temperatures in Nairobi are a function of large-scale climate variability associated with the Atlantic Multi-decadal Oscillation (AMO) and Global Mean Sea Surface Temperature (GMSST). Extreme heat events, however, are more strongly associated with the El Niño Southern Oscillation (ENSO). The persistence of the AMO and ENSO, in particular, provides a basis for seasonal prediction of extreme heat events and days in Nairobi. It is also apparent that the temporal signal of extreme heat events in the tropics differs from classic heat wave definitions developed in the mid-latitudes, suggesting that a new approach for defining these events is necessary for tropical regions.

  8. Precipitation extreme changes exceeding moisture content increases in MIROC and IPCC climate models

    PubMed Central

    Sugiyama, Masahiro; Shiogama, Hideo; Emori, Seita

    2010-01-01

    Precipitation extreme changes are often assumed to scale with, or are constrained by, the change in atmospheric moisture content. Studies have generally confirmed the scaling based on moisture content for the midlatitudes but identified deviations for the tropics. In fact half of the twelve selected Intergovernmental Panel on Climate Change (IPCC) models exhibit increases faster than the climatological-mean precipitable water change for high percentiles of tropical daily precipitation, albeit with significant intermodel scatter. Decomposition of the precipitation extreme changes reveals that the variations among models can be attributed primarily to the differences in the upward velocity. Both the amplitude and vertical profile of vertical motion are found to affect precipitation extremes. A recently proposed scaling that incorporates these dynamical effects can capture the basic features of precipitation changes in both the tropics and midlatitudes. In particular, the increases in tropical precipitation extremes significantly exceed the precipitable water change in Model for Interdisciplinary Research on Climate (MIROC), a coupled general circulation model with the highest resolution among IPCC climate models whose precipitation characteristics have been shown to reasonably match those of observations. The expected intensification of tropical disturbances points to the possibility of precipitation extreme increases beyond the moisture content increase as is found in MIROC and some of IPCC models. PMID:20080720

  9. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    PubMed

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding the increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest that increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that the increased environmental stochasticity resulting from extreme droughts was insufficient to create the asymptotic scaling of time-to-extinction with carrying capacity in local populations predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1-in-25-year extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying-capacity-related asymptotes in persistence under extreme climate variability reveals how small populations, affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).

  10. Using an Improved SIFT Algorithm and Fuzzy Closed-Loop Control Strategy for Object Recognition in Cluttered Scenes

    PubMed Central

    Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo

    2015-01-01

    Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation in object recognition systems. This paper therefore presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed that classifies SIFT features into several clusters based on attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, feature matching is performed in a prioritized order based on the scale factor calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of object recognition, which is essential for an autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the results show that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirm that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
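    As context for the baseline the authors improve upon, the snippet below shows plain OpenCV SIFT matching with Lowe's ratio test; it does not implement the SOH-based clustering or the fuzzy closed-loop controller, requires OpenCV >= 4.4, and the file names are hypothetical.

```python
# Plain SIFT + ratio-test matching baseline (not the paper's improved method).
import cv2

def match_sift(object_path, scene_path, ratio=0.75):
    """Detect SIFT keypoints in two images and keep ratio-test matches."""
    img1 = cv2.imread(object_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]   # Lowe's ratio test
    return kp1, kp2, good

# kp1, kp2, matches = match_sift("object.png", "scene.png")  # hypothetical files
```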

  11. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
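    To make the role of hypothesis weights concrete, the sketch below applies a generic weighted Benjamini-Hochberg step-up in which each p-value is divided by its (mean-one) weight before the usual FDR thresholding. This is a standard illustration of weighted multiple testing, not the Princessp convex-optimization procedure itself; the simulated p-values and weights are hypothetical.

```python
# Generic weighted Benjamini-Hochberg step-up (illustration only; not Princessp).
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Return a boolean rejection mask; weights must be positive."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                     # normalise weights to average one
    q = p / w                            # weighted p-values
    order = np.argsort(q)
    m = len(q)
    thresh = alpha * np.arange(1, m + 1) / m
    below = q[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest index passing the step-up rule
        reject[order[: k + 1]] = True
    return reject

rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=900), rng.beta(1, 60, size=100)])
w = np.concatenate([np.full(900, 0.5), np.full(100, 5.5)])   # prioritised block
print(weighted_bh(p, w).sum(), "rejections")
```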

  12. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and the resulting data are often associated with high false-positive and false-negative rates. To address these problems, we propose a method for PPI detection that integrates biosensor-based PPI data with a novel computational model. The method is based on an extreme learning machine combined with a novel representation of a protein sequence descriptor. When applied to a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The results demonstrate that our approach is very promising for detecting new PPIs and can be a helpful supplement to biosensor-based PPI data detection.
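    For readers unfamiliar with the classifier named above, a minimal extreme learning machine fixes a random hidden layer and solves for the output weights in closed form. The sketch below is a generic ELM on toy data; it is not the paper's model and does not include the protein-sequence descriptor.

```python
# Minimal extreme learning machine: random hidden layer + least-squares output.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))         # sigmoid features

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed, random
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                            # closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(float)                            # toy binary target
model = ELM().fit(X[:400], y[:400])
pred = (model.predict(X[400:]) > 0.5).astype(float)
print("toy accuracy:", (pred == y[400:]).mean())
```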

  13. Extreme reaction times determine fluctuation scaling in human color vision

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-11-01

    In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of color space. For all subjects, the results show that very large reaction times deviate from the right tail of the reaction time distributions, suggesting the existence of dragon-king events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than for isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
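    One way to picture the skewness-kurtosis analysis referred to above is to compute both moments per stimulus condition and fit a power law between them. The sketch below does this on synthetic lognormal reaction times; the distributions, parameters, and fitting choice are all hypothetical and not the authors' analysis.

```python
# Conceptual skewness-kurtosis scaling fit on synthetic reaction-time samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical stimulus conditions with increasingly heavy-tailed reaction times
samples = [stats.lognorm.rvs(s=s, scale=300, size=2000, random_state=rng)
           for s in np.linspace(0.2, 0.8, 8)]

skew = np.array([stats.skew(x) for x in samples])
kurt = np.array([stats.kurtosis(x, fisher=False) for x in samples])

# power-law fit  kurtosis ~ a * skewness**b  via least squares in log-log space
b, log_a = np.polyfit(np.log(skew), np.log(kurt), 1)
print(f"scaling exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")
```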

  14. Structural and Mechanical Properties of Intermediate Filaments under Extreme Conditions and Disease

    NASA Astrophysics Data System (ADS)

    Qin, Zhao

    Intermediate filaments are one of the three major components of the cytoskeleton in eukaryotic cells. It was discovered in recent decades that intermediate filament proteins play key roles in reinforcing cells subjected to large deformation as well as in signal transduction. However, it is still poorly understood how the nanoscopic structure and the biochemical properties of these protein molecules contribute to their biomechanical functions. In this research we investigate the material function of intermediate filaments under various extreme mechanical conditions as well as disease states. We use a full atomistic model and study its response to mechanical stresses. Learning from the mechanical response obtained in atomistic simulations, we build mesoscopic models following the finer-trains-coarser principle. Using this multiple-scale model, we present a detailed analysis of the mechanical properties and associated deformation mechanisms of the intermediate filament network. We reveal the mechanism of a transition from alpha-helices to beta-sheets with subsequent intermolecular sliding under mechanical force, which had previously been inferred from experimental results. This nanoscale mechanism results in a characteristic nonlinear force-extension curve, which leads to a delocalization of mechanical energy and prevents catastrophic fracture. This explains how intermediate filaments can withstand extreme mechanical deformation of >100% strain despite the presence of structural defects. We combine computational and experimental techniques to investigate the molecular mechanism of Hutchinson-Gilford progeria syndrome, a premature aging disease. We find that the mutated lamin tail domain is more compact and stable than the normal one. This altered structure and stability may enhance the association of intermediate filaments with the nuclear membrane, providing a molecular mechanism of the disease. We study the nuclear membrane association with intermediate filaments by focusing on the effect of calcium on the maturation process of lamin A. Our results show that calcium plays a regulatory role in the post-translational processing of lamin A by tuning its molecular conformation and mechanics. Based on these findings we demonstrate that multiple-scale computational modeling provides a useful tool for understanding the biomechanical properties and disease mechanisms of intermediate filaments. We provide a perspective on research opportunities to improve the foundation for engineering the mechanical and biochemical functions of biomaterials. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  15. Holistic view to integrated climate change assessment and extreme weather adaptation in the Lake Victoria Basin East Africa

    NASA Astrophysics Data System (ADS)

    Mutua, F.; Koike, T.

    2013-12-01

    Extreme weather events have been the leading cause of disasters and damage all over the world. The primary ingredient in these disasters, especially floods, is rainfall, which over the years, despite advances in modeling, computing power and the use of new data and technologies, has proven difficult to predict. Moreover, recent climate projections show a pattern consistent with an increase in the intensity and frequency of extreme events in the East African region. We propose a holistic, integrated approach to climate change assessment and extreme event adaptation through the coupling of analysis techniques, tools and data. The Lake Victoria Basin (LVB) in East Africa supports over three million livelihoods and is a valuable resource to five East African countries as a source of water and means of transport. However, with a mesoscale weather regime driven by land and lake dynamics, extreme mesoscale events have been prevalent and the region has been on the receiving end during anomalously wet years. This has resulted in loss of lives, displacement, and food insecurity. In the LVB, the effects of climate change are increasingly recognized as a significant contributor to poverty through their linkage to agriculture, food security and water resources. Of particular importance are the likely impacts of climate change on the frequency and intensity of extreme events. To tackle this aspect, this study adopted an integrated regional, mesoscale and basin-scale approach to climate change assessment. We investigated the projected changes in mean climate over East Africa, diagnosed the signals of climate change in the atmosphere, and transferred this understanding to the mesoscale and basin scale. Changes in rainfall were analyzed and, consistent with the IPCC AR4 report, the three selected General Circulation Models (GCMs) project a wetter East Africa with intermittent dry periods in June-August. Extreme events in the region are projected to increase, with the number of wet days exceeding the 90th percentile of 1981-2000 likely to increase by 20-40% across the whole region. We also focused on short-term weather forecasting as a step towards adapting to a changing climate. This involved dynamic downscaling of global weather forecasts to high resolution with a special focus on extreme events. By utilizing complex model dynamics, the system reproduced the mesoscale dynamics well and simulated the land/lake breeze and diurnal pattern, but was inadequate in some aspects: the quantitative prediction of rainfall was inaccurate, with overestimation and misplacement, although occurrence was reasonably captured. To address these shortcomings we investigated the value added by assimilating Advanced Microwave Scanning Radiometer (AMSR-E) brightness temperature during an extreme event. By assimilating 23 GHz (sensitive to water) and 89 GHz (sensitive to cloud) brightness temperatures, the predictability of an extreme rain event was investigated. The assimilation, through a Cloud Microphysics Data Assimilation (CMDAS) scheme into the weather prediction model, considerably improved the simulated spatial distribution of this event.

  16. Climate teleconnections, weather extremes, and vector-borne disease outbreaks

    USDA-ARS?s Scientific Manuscript database

    Fluctuations in climate lead to extremes in temperature, rainfall, flooding, and droughts. These climate extremes create ideal ecological conditions that promote mosquito-borne disease transmission that impact global human and animal health. One well known driver of such global scale climate fluctua...

  17. Automatic detection and analysis of cell motility in phase-contrast time-lapse images using a combination of maximally stable extremal regions and Kalman filter approaches.

    PubMed

    Kaakinen, M; Huttunen, S; Paavolainen, L; Marjomäki, V; Heikkilä, J; Eklund, L

    2014-01-01

    Phase-contrast illumination is a simple and the most commonly used microscopy method for observing unstained living cells. Automatic cell segmentation and motion analysis provide tools to analyze single-cell motility in large cell populations. However, the challenge is to find a method that is sufficiently accurate to generate reliable results, robust enough to function under the wide range of illumination conditions encountered in phase-contrast microscopy, and also computationally light enough for efficient analysis of large numbers of cells and image frames. To develop better automatic tools for the analysis of low-magnification phase-contrast images in time-lapse cell migration movies, we investigated the performance of a cell segmentation method based on the intrinsic properties of maximally stable extremal regions (MSER). MSER was found to be reliable and effective over a wide range of experimental conditions. Compared to commonly used segmentation approaches, MSER required negligible preoptimization, dramatically reducing the computation time. To analyze cell migration characteristics in time-lapse movies, the MSER-based automatic cell detection was accompanied by a Kalman filter multi-object tracker that efficiently tracked individual cells even in confluent cell populations. This allowed quantitative cell motion analysis resulting in accurate measurements of the migration magnitude and direction of individual cells, as well as characteristics of the collective migration of cell groups. Our results demonstrate that MSER accompanied by temporal data association is a powerful tool for accurate and reliable analysis of the dynamic behaviour of cells in phase-contrast image sequences. These techniques tolerate varying and non-optimal imaging conditions and, owing to their relatively light computational requirements, should help to resolve problems in the computationally demanding and often time-consuming large-scale dynamical analysis of cultured cells. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
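    The detection stage of such a pipeline can be sketched with OpenCV's MSER detector, which returns candidate regions whose centroids could then be handed to a Kalman-filter tracker. The snippet below is only a schematic of that first stage, not the authors' software; the area limits and file name are hypothetical.

```python
# MSER-based candidate-cell detection (tracking/data association not shown).
import cv2
import numpy as np

def detect_cells(frame_path, min_area=30, max_area=2000):
    """Return centroids of maximally stable extremal regions in one frame."""
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    mser.setMinArea(min_area)
    mser.setMaxArea(max_area)
    regions, _ = mser.detectRegions(gray)
    return np.array([r.mean(axis=0) for r in regions])  # one centroid per region

# centroids = detect_cells("frame_0001.png")            # hypothetical frame
```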

  18. A Generalized Framework for Non-Stationary Extreme Value Analysis

    NASA Astrophysics Data System (ADS)

    Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.

    2017-12-01

    Empirical trends in climate variables including precipitation, temperature, and snow-water equivalent at regional to continental scales are evidence of changes in climate over time. Evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisals of time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis that can better characterize the severity and likelihood of extreme climatic variables. This is critical for ensuring a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) toolbox by Cheng et al. (2014), we present an improved version, NEVA2.0. The upgraded version builds upon a newly developed hybrid-evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time covariate (e.g., CO2 emissions, large-scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by the physical conditions that underlie extreme events (e.g., antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity-Duration-Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. Finally, a Graphical User Interface (GUI) is provided, making NEVA accessible to a broader audience.
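    A common way such tools encode non-stationarity is to let a GEV parameter depend on a covariate; the generic form below (location linear in a covariate c(t), e.g. time or CO2) is shown only as an illustration and is not necessarily NEVA2.0's exact parameterisation.

```latex
\[
F(x \mid t) \;=\; \exp\!\left\{-\left[1 + \xi\,\frac{x - \mu(t)}{\sigma}\right]^{-1/\xi}\right\},
\qquad \mu(t) \;=\; \mu_0 + \mu_1\, c(t).
\]
```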

  19. [Process strategy for ethanol production from lignocellulose feedstock under extremely low water usage and high solids loading conditions].

    PubMed

    Zhang, Jian; Chu, Deqiang; Yu, Zhanchun; Zhang, Xiaoxi; Deng, Hongbo; Wang, Xiusheng; Zhu, Zhinan; Zhang, Huaiqing; Dai, Gance; Bao, Jie

    2010-07-01

    Massive amounts of water and steam are consumed in the production of cellulosic ethanol, resulting in significant increases in energy cost, wastewater discharge and production cost. In this study, a process strategy under extremely low water usage and high solids loading of corn stover was investigated experimentally and computationally. A novel pretreatment technology with zero wastewater discharge was developed, in which a unique biodetoxification method using the kerosene fungus strain Amorphotheca resinae ZN1 was applied to degrade the lignocellulose-derived inhibitors. With high solids loading of pretreated corn stover, a high ethanol titer was achieved in the simultaneous saccharification and fermentation process, and the scale-up principles were studied. Furthermore, flowsheet simulation of the whole process was carried out with an Aspen Plus-based physical property database, and the integrated process was tested in a biorefinery mini-plant. Finally, the core technologies were applied in a cellulosic ethanol demonstration plant, paving the way for an energy-saving and environmentally friendly lignocellulose biotransformation technology with industrial application potential.

  20. Efficacy of dietary supplement with nutraceutical composed combined with extremely-low-frequency electromagnetic fields in carpal tunnel syndrome

    PubMed Central

    Paolucci, Teresa; Piccinini, Giulia; Nusca, Sveva Maria; Marsilli, Gabriella; Mannocci, Alice; La Torre, Giuseppe; Saraceni, Vincenzo Maria; Vulpiani, Maria Chiara; Villani, Ciro

    2018-01-01

    [Purpose] The aim of this study was to investigate the clinical effects of a nutraceutical compound (Xinepa®) combined with extremely-low-frequency electromagnetic fields in carpal tunnel syndrome. [Subjects and Methods] Thirty-one patients with carpal tunnel syndrome were randomized into group 1-A (n=16; nutraceutical + extremely-low-frequency electromagnetic fields) and group 2-C (n=15; placebo + extremely-low-frequency electromagnetic fields). The nutraceutical dietary supplement was taken twice daily for one month in group 1-A, and both groups received extremely-low-frequency electromagnetic fields at the level of the carpal tunnel 3 times per week for 12 sessions. The Visual Analogue Scale for pain and the Symptom Severity Scale and Functional Severity Scale of the Boston Carpal Tunnel Questionnaire were administered at pre-treatment (T0), at the end of treatment (T1) and at 3 months post-treatment (T2). [Results] At T1 and T2 there were no significant differences in outcome measures between the two groups. In group 1-A, a significant improvement in the scales was observed at both T1 and T2; in group 2-C, it was observed only at T1. [Conclusion] Significant clinical effects from pre-treatment to the end of treatment were shown in both groups; only in group 1-A were they maintained at 3 months post-treatment.

  1. Portable Upper Extremity Robotics is as Efficacious as Upper Extremity Rehabilitative Therapy: A Randomized Controlled Pilot Trial

    PubMed Central

    Page, Stephen J.; Hill, Valerie; White, Susan

    2013-01-01

    Objective To compare the efficacy of a repetitive task-specific practice regimen integrating a portable, electromyography-controlled brace called the "Myomo" versus usual-care repetitive task-specific practice in subjects with chronic, moderate upper extremity impairment. Subjects 16 subjects (7 males; mean age = 57.0 ± 11.02 years; mean time post stroke = 75.0 ± 87.63 months; 5 left-sided strokes) exhibiting chronic, stable, moderate upper extremity impairment. Interventions Subjects were administered repetitive task-specific practice in which they participated in valued, functional tasks using their paretic upper extremities. Both groups were supervised by a therapist and were administered therapy targeting their paretic upper extremities that was 30 minutes in duration, occurring 3 days/week for 8 weeks. However, one group participated in repetitive task-specific practice entirely while wearing the portable robotic brace, while the other performed the same activity regimen manually. Main Outcome Measures The upper extremity Fugl-Meyer, Canadian Occupational Performance Measure and Stroke Impact Scale were administered on two occasions before intervention and once after intervention. Results After intervention, both groups exhibited nearly identical Fugl-Meyer score increases of ≈ 2.1 points; the group using robotics exhibited larger score changes on all but one of the Canadian Occupational Performance Measure and Stroke Impact Scale subscales, including a 12.5-point increase on the Stroke Impact Scale recovery subscale. Conclusions Findings suggest that therapist-supervised repetitive task-specific practice integrating robotics is as efficacious as manual practice in subjects with moderate upper extremity impairment. PMID:23147552

  2. Numerical experiments to explain multiscale hydrological responses to mountain pine beetle tree mortality in a headwater watershed

    USGS Publications Warehouse

    Penn, Colin A.; Bearup, Lindsay A.; Maxwell, Reed M.; Clow, David W.

    2016-01-01

    The effects of mountain pine beetle (MPB)-induced tree mortality on a headwater hydrologic system were investigated using an integrated physical modeling framework with a high-resolution computational grid. Simulations of MPB-affected and unaffected conditions, each with identical atmospheric forcing for a normal water year, were compared at multiple scales to evaluate the effects of scale on MPB-affected hydrologic systems. Individual locations within the larger model were shown to maintain hillslope-scale processes affecting snowpack dynamics, total evapotranspiration, and soil moisture that are comparable to several field-based studies and previous modeling work. Hillslope-scale analyses also highlight the influence of compensating changes in evapotranspiration and snow processes. Reduced transpiration in the Grey Phase of MPB-induced tree mortality was offset by increased late-summer evaporation, while overall snowpack dynamics were more dependent on elevation effects than on MPB-induced tree mortality. At the watershed scale, unaffected areas obscured the magnitude of MPB effects. Annual water yield from the watershed increased by 11 percent in the Grey Phase simulations, a difference that would be difficult to diagnose with long-term gage observations complicated by inter-annual climate variability. The effects on hydrology observed and simulated at the hillslope scale can be further damped at the watershed scale, which spans more life zones and a broader range of landscape properties. These scaling effects may change under extreme conditions, e.g., increased total MPB-affected area or a water year with above-average snowpack.

  3. A Review of Recent Advances in Research on Extreme Heat Events

    NASA Technical Reports Server (NTRS)

    Horton, Radley M.; Mankin, Justin S.; Lesk, Corey; Coffel, Ethan; Raymond, Colin

    2016-01-01

    Reviewing recent literature, we report that changes in extreme heat event characteristics such as magnitude, frequency, and duration are highly sensitive to changes in mean global-scale warming. Numerous studies have detected significant changes in the observed occurrence of extreme heat events, irrespective of how such events are defined. Further, a number of these studies have attributed present-day changes in the risk of individual heat events and the documented global-scale increase in such events to anthropogenic-driven warming. Advances in process-based studies of heat events have focused on the proximate land-atmosphere interactions through soil moisture anomalies, and changes in occurrence of the underlying atmospheric circulation associated with heat events in the mid-latitudes. While evidence for a number of hypotheses remains limited, climate change nevertheless points to tail risks of possible changes in heat extremes that could exceed estimates generated from model outputs of mean temperature. We also explore risks associated with compound extreme events and nonlinear impacts associated with extreme heat.

  4. Massively parallel de novo protein design for targeted therapeutics.

    PubMed

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J; Hicks, Derrick R; Vergara, Renan; Murapa, Patience; Bernard, Steffen M; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T; Koday, Merika T; Jenkins, Cody M; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M; Fernández-Velasco, D Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A; Fuller, Deborah H; Baker, David

    2017-10-05

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  5. Automated metadata--final project report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schissel, David

    This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project), funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and the Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research, illustrating the general applicability of the project's toolkit. Feedback was very positive on the project's toolkit and on the value of such automatic workflow documentation to the scientific endeavor.

  6. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers producing terabyte-scale data cubes, the story of the giant SKA telescope is also one of collaborative effort across radio astronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction involves solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses both challenges, albeit partially. We consider an extremely simple data acquisition model and focus on strategies that make it possible to implement 3D reconstruction algorithms using state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will make it possible to implement and compare various 3D reconstruction approaches in a large-scale framework.
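    The idea of spreading the computational load over spectral bands can be pictured with a toy cube in which each band is deconvolved independently by a Tikhonov/Wiener filter and the bands are farmed out to worker processes. This is purely conceptual: the approach described above couples bands through spectral regularization, which this sketch does not attempt, and all sizes and parameters are hypothetical.

```python
# Toy band-parallel deconvolution of a spectral cube (conceptual only).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_band(args):
    dirty, psf, lam = args
    # per-band Tikhonov/Wiener solution: X = conj(H) Y / (|H|^2 + lambda)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(dirty)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

def reconstruct_cube(dirty_cube, psf, lam=1e-2, workers=4):
    jobs = [(band, psf, lam) for band in dirty_cube]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.stack(list(pool.map(reconstruct_band, jobs)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    psf = np.zeros((64, 64)); psf[32, 32] = 1.0       # trivial delta PSF
    cube = rng.normal(size=(8, 64, 64))               # 8 spectral bands
    print(reconstruct_cube(cube, psf).shape)
```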

  7. Massively parallel de novo protein design for targeted therapeutics

    NASA Astrophysics Data System (ADS)

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2017-10-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  8. [Computer-assisted phonetography as a diagnostic aid in functional dysphonia].

    PubMed

    Airainer, R; Klingholz, F

    1991-07-01

    A total of 160 voice-trained and untrained subjects with functional dysphonia were given a "clinical rating" according to their clinical findings. This rating was a value on a scale recording the degree of functional voice disorder, ranging from marked hypofunction to extreme hyperfunction. The phonetograms of these patients were approximated by ellipses, which made it possible to define and quantitatively record several phonetogram parameters. By means of a linear combination of phonetogram parameters, a "calculated assessment" was obtained for each patient that was expected to tally with the "clinical rating". This paper demonstrates that a grading of the dysphonic clinical picture with regard to the presence of hypofunctional or hyperfunctional components is possible via computerised phonetogram evaluation. To this end, the "calculated assessments" for both male and female singers and non-singers must be computed using different linear combinations. The method can be used as a supplementary procedure in the diagnosis of functional dysphonia.
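    Purely for illustration (the study's data and parameter definitions are not reproduced here), the following sketch fits a linear combination of invented ellipse-style phonetogram parameters to synthetic clinical ratings, in the spirit of the "calculated assessment" described above.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 160
      # Hypothetical phonetogram parameters per patient (invented for this sketch).
      area = rng.normal(45.0, 10.0, n)       # ellipse area
      slope = rng.normal(0.5, 0.2, n)        # ellipse axis orientation
      max_spl = rng.normal(95.0, 6.0, n)     # loudest phonation, dB SPL

      # Synthetic clinical rating from hypofunction (negative) to hyperfunction (positive).
      rating = 0.03 * (max_spl - 95.0) + 1.5 * (slope - 0.5) + rng.normal(0.0, 0.3, n)

      X = np.column_stack([np.ones(n), area, slope, max_spl])
      coef, *_ = np.linalg.lstsq(X, rating, rcond=None)   # least-squares linear combination
      calculated = X @ coef
      print("corr(clinical, calculated) =", round(np.corrcoef(rating, calculated)[0, 1], 2))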

  9. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
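    The core idea, excitation gated by a delayed stimulus-driven suppression, can be caricatured in a few lines. This toy sketch uses invented kernels, delays and gains rather than the fitted model from the paper; it only illustrates why spikes become confined to brief windows where excitation exceeds suppression.

      import numpy as np

      dt = 0.001                                   # 1 ms resolution
      t = np.arange(0.0, 1.0, dt)
      rng = np.random.default_rng(1)
      stimulus = rng.normal(size=t.size)

      # Excitatory drive: temporally filtered stimulus; suppression: delayed, scaled copy.
      kernel = np.exp(-np.arange(0.0, 0.05, dt) / 0.01)
      excitation = np.convolve(stimulus, kernel, mode="full")[: t.size]
      delay = int(0.005 / dt)                      # 5 ms delay (assumed)
      suppression = 0.8 * np.roll(excitation, delay)
      suppression[:delay] = 0.0

      drive = np.clip(excitation - suppression, 0.0, None)   # respond only when E > S
      rate = 50.0 * drive                          # spikes/s, arbitrary gain
      spikes = rng.random(t.size) < rate * dt      # Bernoulli approximation of a Poisson process
      print("spike count:", int(spikes.sum()))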

  10. A computer graphics reconstruction and optical analysis of scale anomalies in Caravaggio's Supper at Emmaus

    NASA Astrophysics Data System (ADS)

    Stork, David G.; Furuichi, Yasuo

    2011-03-01

    David Hockney has argued that the right hand of the disciple, thrust to the rear in Caravaggio's Supper at Emmaus (1606), is anomalously large as a result of the artist refocusing a putative secret lens-based optical projector and tracing the image it projected onto his canvas. We show through rigorous optical analysis that to achieve such an anomalously large hand image, Caravaggio would have needed to make extremely large, conspicuous and implausible alterations to his studio setup, moving both his purported lens and his canvas nearly two meters between "exposing" the disciple's left hand and then his right hand. Such major disruptions to his studio would have impeded, not aided, Caravaggio in his work. Our optical analysis quantifies these problems, and our computer graphics reconstruction of Caravaggio's studio illustrates them. In this way we conclude that Caravaggio did not use optical projections in the way claimed by Hockney, but instead most likely set the sizes of these hands "by eye" for artistic reasons.

  11. Massively parallel de novo protein design for targeted therapeutics

    PubMed Central

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2018-01-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37–43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing. PMID:28953867

  12. Motivation, values, and work design as drivers of participation in the R open source project for statistical computing

    PubMed Central

    Mair, Patrick; Hofmann, Eva; Gruber, Kathrin; Hatzinger, Reinhold; Zeileis, Achim; Hornik, Kurt

    2015-01-01

    One of the cornerstones of the R system for statistical computing is the multitude of packages contributed by numerous package authors. This multitude of packages makes an extremely broad range of statistical techniques and other quantitative methods freely available. Thus far, no empirical study has investigated psychological factors that drive authors to participate in the R project. This article presents a study of R package authors, collecting data on different types of participation (number of packages, participation in mailing lists, participation in conferences), three psychological scales (types of motivation, psychological values, and work design characteristics), and various socio-demographic factors. The data are analyzed using item response models and subsequent generalized linear models, showing that the most important determinants for participation are a hybrid form of motivation and the social characteristics of the work design. Other factors are found to have less impact or influence only specific aspects of participation. PMID:26554005

  13. Motivation, values, and work design as drivers of participation in the R open source project for statistical computing.

    PubMed

    Mair, Patrick; Hofmann, Eva; Gruber, Kathrin; Hatzinger, Reinhold; Zeileis, Achim; Hornik, Kurt

    2015-12-01

    One of the cornerstones of the R system for statistical computing is the multitude of packages contributed by numerous package authors. This multitude of packages makes an extremely broad range of statistical techniques and other quantitative methods freely available. Thus far, no empirical study has investigated psychological factors that drive authors to participate in the R project. This article presents a study of R package authors, collecting data on different types of participation (number of packages, participation in mailing lists, participation in conferences), three psychological scales (types of motivation, psychological values, and work design characteristics), and various socio-demographic factors. The data are analyzed using item response models and subsequent generalized linear models, showing that the most important determinants for participation are a hybrid form of motivation and the social characteristics of the work design. Other factors are found to have less impact or influence only specific aspects of participation.

  14. Nanomagnetic Logic

    NASA Astrophysics Data System (ADS)

    Carlton, David Bryan

    The exponential improvements in speed, energy efficiency, and cost that the computer industry has relied on for growth during the last 50 years are in danger of ending within the decade. These improvements have all relied on scaling the size of the silicon-based transistor at the heart of every modern CPU down to smaller and smaller length scales. However, as the size of the transistor reaches scales that are measured in the number of atoms that make it up, it is clear that this scaling cannot continue forever. As a result, there has been a great deal of research effort directed at the search for the next device that will continue to power the growth of the computer industry. However, given the billions of dollars of investment that the conventional silicon transistor has received over the years, it is unlikely that a technology will emerge that will be able to beat it outright in every performance category. More likely, different devices will possess advantages over conventional transistors for certain applications and uses. One of these emerging computing platforms is nanomagnetic logic (NML). NML-based circuits process information by manipulating the magnetization states of single-domain nanomagnets coupled to their nearest neighbors through magnetic dipole interactions. The state variable is magnetization direction, and computations can take place without passing an electric current. This makes them extremely attractive as a replacement for conventional transistor-based computing architectures in certain ultra-low-power applications. In most work to date, nanomagnetic logic circuits have used an external magnetic clocking field to reset the system between computations. The clocking field is then removed very slowly relative to the magnetization dynamics, guiding the nanomagnetic logic circuit adiabatically into its magnetic ground state. In this dissertation, I will discuss the dynamics behind this process and show that it is greatly influenced by thermal fluctuations. The magnetic ground state containing the answer to the computation is reached by a stochastic process very similar to the thermal annealing of crystalline materials. We will discuss how these dynamics affect the expected reliability, speed, and energy dissipation of NML systems operating under these conditions. Next I will show how a slight change in the properties of the nanomagnets that make up an NML circuit can completely alter the dynamics by which computations take place. The addition of biaxial anisotropy to the magnetic energy landscape creates a metastable state along the hard axis of the nanomagnet. This metastability can be used to remove the stochastic nature of the computation and has large implications for reliability, speed, and energy dissipation, all of which will be discussed. The changes to NML operation brought about by the addition of biaxial anisotropy introduce new challenges to realizing a commercially viable logic architecture. In the final chapter, I will discuss these challenges and the architectural changes that are necessary to make a working NML circuit based on nanomagnets with biaxial anisotropy.
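    As a schematic aside (not the dissertation's micromagnetic simulations), the stochastic relaxation described above can be caricatured with a chain of antiferromagnetically coupled binary "nanomagnets" that are reset by a clocking field and then relaxed by thermal flips as the field is ramped down; all parameters are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      N, J = 16, 1.0                       # chain length, nearest-neighbour coupling strength
      spins = np.ones(N)                   # all magnets reset along the clocking field
      input_bit = -1.0                     # boundary condition set by the input magnet

      def energy(s, h):
          coupling = J * np.sum(s[:-1] * s[1:])   # antiferromagnetic: lower when neighbours alternate
          field = -h * np.sum(s)                  # clocking field term
          drive = -5.0 * input_bit * s[0]         # the input pins the first magnet
          return coupling + field + drive

      for h in np.linspace(3.0, 0.0, 300):        # slowly remove the clocking field
          for _ in range(N):
              i = rng.integers(N)
              trial = spins.copy()
              trial[i] *= -1
              dE = energy(trial, h) - energy(spins, h)
              if dE < 0 or rng.random() < np.exp(-dE / 0.1):   # thermal flip, kT = 0.1 (assumed)
                  spins = trial

      print("final chain state:", spins.astype(int))           # ideally an alternating pattern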

  15. The association between preceding drought occurrence and heat waves in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Russo, Ana; Gouveia, Célia M.; Ramos, Alexandre M.; Páscoa, Patricia; Trigo, Ricardo M.

    2017-04-01

    A large number of weather-driven extreme events have occurred worldwide in the last decade, namely in Europe, which has been struck by record-breaking extreme events with unprecedented socio-economic impacts, including the mega-heatwaves of 2003 in Europe and 2010 in Russia, and the large droughts in southwestern Europe in 2005 and 2012. The last IPCC report on extreme events points out that a changing climate can lead to changes in the frequency, intensity, spatial extent, duration, and timing of weather and climate extremes. These, combined with larger exposure, can result in unprecedented risk to humans and ecosystems. In this context it is becoming increasingly relevant to improve the early identification and predictability of such events, as they negatively affect several socio-economic activities. Moreover, recent diagnostic and modelling experiments have confirmed that hot extremes are often preceded by surface moisture deficits in some regions throughout the world. In this study we analyze whether the occurrence of hot extreme months is enhanced by the occurrence of preceding drought events throughout the Mediterranean area. To this end, the number of hot days in each region's hottest month is associated with a drought indicator. The evolution and characterization of drought were analyzed using both the Standardized Precipitation Evaporation Index (SPEI) and the Standardized Precipitation Index (SPI), as obtained from the CRU TS3.23 database for the period 1950-2014. We have used both SPI and SPEI for different time scales between 3 and 9 months with a spatial resolution of 0.5°. The number of hot days and nights per month (NHD and NHN) was determined using the ECAD-EOBS daily dataset for the same period and spatial resolution (dataset v14). The NHD and NHN were computed, respectively, as the number of days with a maximum or minimum temperature exceeding the 90th percentile. Results show that the most frequent hottest months for the Mediterranean region occur in July and August. Moreover, the magnitudes of the correlations between detrended NHD/NHN and the preceding 6- and 9-month SPEI/SPI are usually weaker than for the 3-month time scale. Most regions exhibit significantly negative correlations, i.e. high (low) NHD/NHN following negative (positive) SPEI/SPI values, and thus a potential for NHD/NHN early warning. Finally, correlations of NHD/NHN with SPI and SPEI differ, with SPEI showing slightly higher values, mainly for the 3-month time scale. Acknowledgments: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project IMDROFLOOD (WaterJPI/0004/2014). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014). A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
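    A minimal sketch of the kind of diagnostic described above, using synthetic data rather than the CRU and ECAD-EOBS records: count July days above a fixed 90th-percentile threshold of daily maximum temperature, detrend the series, and correlate it with a preceding 3-month drought index.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(3)
      years = np.arange(1950, 2015)
      spei3 = rng.normal(size=years.size)                     # stand-in for the 3-month SPEI

      # Synthetic daily July Tmax; drier preceding conditions nudge temperatures upward.
      tmax = rng.normal(30.0, 3.0, size=(years.size, 31)) - 1.5 * spei3[:, None]
      p90 = np.percentile(tmax, 90)                           # 90th-percentile threshold
      nhd = (tmax > p90).sum(axis=1)                          # number of hot days per July

      nhd_detrended = nhd - np.polyval(np.polyfit(years, nhd, 1), years)
      r, p = pearsonr(nhd_detrended, spei3)
      print(f"corr(NHD, SPEI-3) = {r:.2f} (p = {p:.3f})")     # expect a negative correlation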

  16. Investigating power capping toward energy-efficient scientific applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  17. Investigating power capping toward energy-efficient scientific applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  18. Development and Initial Validation of an Instrument to Measure Physicians' Use of, Knowledge about, and Attitudes Toward Computers

    PubMed Central

    Cork, Randy D.; Detmer, William M.; Friedman, Charles P.

    1998-01-01

    This paper describes details of four scales of a questionnaire—“Computers in Medical Care”—measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and degree of association of each item with the attribute of interest were determined by principal components factor analysis with orthogonal varimax rotation. Weakly associated items (factor loading < .40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of .79, .91, and .86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality, with a reliability of .81, and the second demand for usability, with a reliability of .69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349
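    For readers unfamiliar with the reliability coefficient used here, the sketch below computes Cronbach's alpha for a scale on synthetic item responses; the questionnaire data themselves are not reproduced, and the factor structure is simulated.

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: respondents x items matrix of scores."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()         # sum of individual item variances
          total_var = items.sum(axis=1).var(ddof=1)           # variance of the total scale score
          return k / (k - 1) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(4)
      latent = rng.normal(size=(771, 1))                      # one underlying attitude factor
      responses = latent + 0.7 * rng.normal(size=(771, 8))    # eight items loading on it
      print(f"alpha = {cronbach_alpha(responses):.2f}")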

  19. Final Technical Report for DE-SC0005467

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broccoli, Anthony J.

    2014-09-14

    The objective of this project is to gain a comprehensive understanding of the key atmospheric mechanisms and physical processes associated with temperature extremes in order to better interpret and constrain uncertainty in climate model simulations of future extreme temperatures. To achieve this objective, we first used climate observations and a reanalysis product to identify the key atmospheric circulation patterns associated with extreme temperature days over North America during the late twentieth century. We found that temperature extremes were associated with distinctive signatures in near-surface and mid-tropospheric circulation. The orientations and spatial scales of these circulation anomalies vary with latitude, season, and proximity to important geographic features such as mountains and coastlines. We next examined the associations between daily and monthly temperature extremes and large-scale, recurrent modes of climate variability, including the Pacific-North American (PNA) pattern, the northern annular mode (NAM), and the El Niño-Southern Oscillation (ENSO). The associations are strongest with the PNA and NAM and weaker with ENSO, and they also depend upon season, time scale, and location. The associations are stronger in winter than summer, stronger for monthly than daily extremes, and stronger in the vicinity of the centers of action of the PNA and NAM patterns. In the final stage of this project, we compared climate model simulations of the circulation patterns associated with extreme temperature days over North America with those obtained from observations. Using a variety of metrics and self-organizing maps, we found that the multi-model ensemble and the majority of individual models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) generally capture the observed patterns well, including their strength as well as their variations with latitude and season. The results from this project indicate that current models are capable of simulating the large-scale meteorological patterns associated with daily temperature extremes, and they suggest that such models can be used to evaluate the extent to which changes in atmospheric circulation will influence future changes in temperature extremes.

  20. Response Styles in Rating Scales: Simultaneous Modeling of Content-Related Effects and the Tendency to Middle or Extreme Categories

    ERIC Educational Resources Information Center

    Tutz, Gerhard; Berger, Moritz

    2016-01-01

    Heterogeneity in response styles can affect the conclusions drawn from rating scale data. In particular, biased estimates can be expected if one ignores a tendency to middle categories or to extreme categories. An adjacent categories model is proposed that simultaneously models the content-related effects and the heterogeneity in response styles.…

  1. Temporal variability in the suspended sediment load and streamflow of the Doce River

    NASA Astrophysics Data System (ADS)

    Oliveira, Kyssyanne Samihra Santos; Quaresma, Valéria da Silva

    2017-10-01

    Long-term records of streamflow and suspended sediment load provide a better understanding of the evolution of a river mouth and its adjacent waters, and support mitigation programs associated with extreme events and engineering projects. The aim of this study is to investigate the temporal variability in the suspended sediment load and streamflow of the Doce River to the Atlantic Ocean between 1990 and 2013. Streamflow and suspended sediment load were analyzed at the daily, seasonal, and interannual scales. The results showed that, at the daily scale, Doce River flood events are due to high-intensity, short-duration rainfalls, which means that there is a flashy response to rainfall. At the monthly and seasonal scales, approximately 94% of the suspended sediment supply occurs during the wet season. At the interannual scale, extreme hydrological events are important for the Doce River sediment supply to the Atlantic Ocean. The results suggest that a combination of anthropogenic interferences (deforestation, urbanization and soil degradation) led to an increase in extreme hydrological events. The findings of this study show the importance of understanding the typical behavior of the Doce River, allowing the detection of extreme hydrological conditions, their causes and their possible environmental and social consequences.

  2. Fast Eigensolver for Computing 3D Earth's Normal Modes

    NASA Astrophysics Data System (ADS)

    Shi, J.; De Hoop, M. V.; Li, R.; Xi, Y.; Saad, Y.

    2017-12-01

    We present a novel parallel computational approach to compute Earth's normal modes. We discretize Earth via an unstructured tetrahedral mesh and apply the continuous Galerkin finite element method to the elasto-gravitational system. To resolve the eigenvalue pollution issue, following an analysis that separates the seismic point spectrum, we explicitly utilize a representation of the displacement for describing the oscillations of the non-seismic modes in the fluid outer core. Effectively, we separate out the essential spectrum, which is naturally related to the Brunt-Väisälä frequency. We introduce two Lanczos approaches, with polynomial and rational filtering, for solving this generalized eigenvalue problem in prescribed intervals. The polynomial filtering technique only accesses the matrix pair through matrix-vector products and is an ideal candidate for solving three-dimensional large-scale eigenvalue problems. The matrix-free scheme allows us to deal with fluid separation and self-gravitation in an efficient way, whereas the standard shift-and-invert method typically needs an explicit shifted matrix and its factorization. The rational filtering method converges much faster than the standard shift-and-invert procedure when computing all the eigenvalues inside an interval. Both Lanczos approaches solve for the interior eigenvalues extremely accurately compared with a standard eigensolver. In our computational experiments, we compare our results with the radial Earth model benchmark, and we visualize the normal modes using vector plots to illustrate the properties of the displacements in different modes.
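    For context, the shift-and-invert baseline mentioned above can be sketched with SciPy on a small synthetic symmetric pencil; the random matrices below merely stand in for the finite-element elasto-gravitational operators and are not part of the authors' code.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      rng = np.random.default_rng(5)
      n = 500
      A = sp.random(n, n, density=0.01, random_state=5)
      A = (A + A.T) * 0.5 + sp.diags(np.linspace(1.0, 10.0, n))   # symmetric "stiffness" stand-in
      B = sp.diags(rng.uniform(1.0, 2.0, n))                      # positive-definite "mass" stand-in

      sigma = 4.0                                                  # centre of the interval of interest
      vals, vecs = eigsh(A.tocsc(), k=6, M=B.tocsc(), sigma=sigma, which="LM")
      print(np.sort(vals))    # the six eigenvalues of A x = lambda B x closest to sigma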

  3. Heat-driven liquid metal cooling device for the thermal management of a computer chip

    NASA Astrophysics Data System (ADS)

    Ma, Kun-Quan; Liu, Jing

    2007-08-01

    The tremendous heat generated in a computer chip or very large scale integrated circuit raises many challenging issues to be solved. Recently, liquid metal with a low melting point was established as the most conductive coolant for efficiently cooling the computer chip. Here, by making full use of the double merits of the liquid metal, i.e. superior heat transfer performance and electromagnetically drivable ability, we demonstrate for the first time the liquid-cooling concept for the thermal management of a computer chip using waste heat to power the thermoelectric generator (TEG) and thus the flow of the liquid metal. Such a device consumes no external net energy, which makes it a self-supporting and completely silent liquid-cooling module. Experiments on devices driven by one- or two-stage TEGs indicate that a dramatic temperature drop on the simulated chip has been realized without the aid of any fans. The higher the heat load, the larger the temperature decrease caused by the cooling device. Further, the two TEGs will generate a larger current if a copper plate is sandwiched between them to enhance heat dissipation there. This new method is expected to be significant in the future thermal management of desktop or notebook computers, where both efficient cooling and extremely low energy consumption are of major concern.

  4. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs), as well as the certainty of the EPTs obtained from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, providing a unique set of EPTs for a large basin with an uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
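    As a hedged illustration of two of the approaches compared above (the DFA variant is not reproduced), the sketch below derives a percentile-based threshold and a parametric return level from synthetic daily rainfall; the station records and fitted distributions of the study are not used.

      import numpy as np
      from scipy.stats import gamma, genextreme

      daily_rain = gamma.rvs(a=0.8, scale=12.0, size=40 * 365, random_state=6)   # mm/day stand-in

      # (1) Non-parametric percentile threshold over wet days.
      wet = daily_rain[daily_rain > 1.0]
      p95 = np.percentile(wet, 95)

      # (2) Parametric threshold: fit a GEV to annual maxima and take the 10-year level.
      annual_max = daily_rain.reshape(40, 365).max(axis=1)
      c, loc, scale = genextreme.fit(annual_max)
      rl10 = genextreme.ppf(1.0 - 1.0 / 10.0, c, loc=loc, scale=scale)

      print(f"95th-percentile wet-day threshold: {p95:.1f} mm")
      print(f"10-year return level (GEV fit):    {rl10:.1f} mm")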

  5. Content range and precision of a computer adaptive test of upper extremity function for children with cerebral palsy.

    PubMed

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George; Mulcahey, M J

    2011-02-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized measures: the Pediatric Outcomes Data Collection Instrument and the Functional Independence Measure for Children. The UE CAT correlated strongly with the upper extremity component of these measures and had greater precision when describing individual functional ability. The UE item bank has a wider range, with items populating the lower end of the ability spectrum. This new UE item bank and CAT have the capability to quickly assess children of all ages and abilities with good precision and, most importantly, with items that are meaningful and appropriate for their age and level of physical function.

  6. Large-scale drivers of local precipitation extremes in convection-permitting climate simulations

    NASA Astrophysics Data System (ADS)

    Chan, Steven C.; Kendon, Elizabeth J.; Roberts, Nigel M.; Fowler, Hayley J.; Blenkinsop, Stephen

    2016-04-01

    The Met Office 1.5-km UKV convection-permitting model (CPM) is used to downscale present-climate and RCP8.5 60-km HadGEM3 GCM simulations. Extreme UK hourly precipitation intensities increase with local near-surface temperature and humidity; for temperature, the simulated rate of increase in the present-climate simulation is about 6.5% per K, which is consistent with observations and theoretical expectations. While extreme intensities are higher in the RCP8.5 simulation as higher temperatures are sampled, there is a decline at the highest temperatures due to circulation and relative humidity changes. Extending the analysis to the broader synoptic scale, it is found that circulation patterns, as diagnosed by MSLP or circulation type, play an increased role in the probability of extreme precipitation in the RCP8.5 simulation. Nevertheless, for both CPM simulations, vertical instability is the principal driver of extreme precipitation.

  7. Regional-Scale High-Latitude Extreme Geoelectric Fields Pertaining to Geomagnetically Induced Currents

    NASA Technical Reports Server (NTRS)

    Pulkkinen, Antti; Bernabeu, Emanuel; Eichner, Jan; Viljanen, Ari; Ngwira, Chigomezyo

    2015-01-01

    Motivated by the needs of the high-voltage power transmission industry, we use data from the high-latitude IMAGE magnetometer array to study the characteristics of extreme geoelectric fields at regional scales. We use 10-s resolution data for the years 1993-2013, and the fields are characterized using average horizontal geoelectric field amplitudes taken over station groups that span about a 500-km distance. We show that geoelectric field structures associated with localized extremes at single stations can be greatly different from structures associated with regionally uniform geoelectric fields, which are well represented by spatial averages over single stations. Visual extrapolation and rigorous extreme value analysis of the spatially averaged fields indicate that the expected ranges for 1-in-100-year extreme events are 3-8 V/km and 3.4-7.1 V/km, respectively. The Quebec reference ground model is used in the calculations.

  8. Sensitivity of extreme precipitation to temperature: the variability of scaling factors from a regional to local perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Kirchengast, G.

    2018-06-01

    Potential increases in extreme rainfall-induced hazards in a warming climate have motivated studies to link precipitation intensities to temperature. Increases exceeding the Clausius-Clapeyron (CC) rate of 6-7% per °C are seen in short-duration, convective, high-percentile rainfall at mid-latitudes, but the rates of change cease or revert at regionally variable threshold temperatures due to moisture limitations. It is unclear, however, what these findings mean in terms of the actual risk of extreme precipitation on a regional to local scale. When conditioning precipitation intensities on local temperatures, key influences on the scaling relationship, such as the annual cycle and regional weather patterns, need to be better understood. Here we analyze these influences, using sub-hourly to daily precipitation data from a dense network of 189 stations in south-eastern Austria. We find that the temperature sensitivities in the mountainous western region are lower than in the eastern lowlands. This is due to the different weather patterns that cause extreme precipitation in these regions. Sub-hourly and hourly intensities intensify at super-CC and CC rates, respectively, up to temperatures of about 17 °C. However, we also find that, because of the regional and seasonal variability of the precipitation intensities, a smaller scaling factor can imply a larger absolute change in intensity. Our insights underline that temperature-precipitation scaling requires careful interpretation of the intent and setting of the study. When this is considered, conditional scaling factors can help to better understand which influences control the intensification of rainfall with temperature on a regional scale.
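    A common way to estimate such scaling factors is to bin intensities by temperature and fit a log-linear slope to a high percentile; the sketch below does this on synthetic data with an assumed 7% per °C signal that levels off near 17 °C, and is not the study's binning procedure.

      import numpy as np

      rng = np.random.default_rng(7)
      temp = rng.uniform(0.0, 25.0, 20000)                       # daily mean temperature, °C
      base = np.exp(0.07 * np.minimum(temp, 17.0))               # ~7 %/°C growth, capped at 17 °C
      intensity = base * rng.lognormal(mean=0.0, sigma=0.6, size=temp.size)

      edges = np.arange(0.0, 26.0, 2.0)                          # 2 °C temperature bins
      centres = 0.5 * (edges[:-1] + edges[1:])
      p99 = np.array([np.percentile(intensity[(temp >= lo) & (temp < hi)], 99)
                      for lo, hi in zip(edges[:-1], edges[1:])])

      mask = centres <= 17.0                                     # fit only the rising part
      slope = np.polyfit(centres[mask], np.log(p99[mask]), 1)[0]
      print(f"estimated scaling: {100.0 * (np.exp(slope) - 1.0):.1f} % per °C")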

  9. The spatiotemporal changes in precipitation extremes over Canada and their connections to large-scale climate patterns

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Gan, T. Y.; Tan, X.

    2017-12-01

    In the past few decades, there have been more extreme climate events around the world, and Canada has also suffered from numerous extreme precipitation events. In this paper, trend analysis, change point analysis, probability distribution functions, principal component analysis and wavelet analysis were used to investigate the spatial and temporal patterns of extreme precipitation in Canada. Ten extreme precipitation indices were calculated using long-term daily precipitation data from 164 gauging stations. Several large-scale climate patterns, such as the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the Pacific-North American pattern (PNA), and the North Atlantic Oscillation (NAO), were selected to analyze the relationships between extreme precipitation and climate indices. Convective Available Potential Energy (CAPE), specific humidity, and surface temperature were employed to investigate the potential causes of the trends. The results show statistically significant positive trends for most indices, which indicate increasing extreme precipitation. The majority of indices display more increasing trends along the southern border of Canada, while decreasing trends dominate in the central Canadian Prairies (CP). In addition, strong connections are found between extreme precipitation and the climate indices, and the effects of the climate patterns differ for each region. The seasonal CAPE, specific humidity, and temperature are found to be closely related to Canadian extreme precipitation.

  10. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show better performance for the smooth GEV fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitude-dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
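    The return levels referred to above follow from the fitted GEV parameters in closed form, z_T = mu + (sigma/xi) * ((-ln(1 - 1/T))^(-xi) - 1) for xi != 0. The sketch below applies this to synthetic annual maxima standing in for a snow-depth series; note that SciPy's shape parameter c equals -xi.

      import numpy as np
      from scipy.stats import genextreme

      annual_max_snow = genextreme.rvs(c=-0.1, loc=120.0, scale=40.0, size=43, random_state=8)

      c, loc, scale = genextreme.fit(annual_max_snow)
      xi = -c
      T = 50.0
      z_T = loc + (scale / xi) * ((-np.log(1.0 - 1.0 / T)) ** (-xi) - 1.0)

      # Cross-check against the fitted distribution's quantile function.
      print(f"closed-form 50-year level: {z_T:.1f} cm")
      print(f"scipy ppf   50-year level: {genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale):.1f} cm")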

  11. HYDROSCAPE: A SCAlable and ParallelizablE Rainfall Runoff Model for Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-12-01

    In this work we present HYDROSCAPE, an innovative streamflow routing method based on the travel time approach and built on a fine-scale geomorphological description of hydrological flow paths. The model is designed to be easily coupled with weather forecast or climate models providing the hydrological forcing, while preserving the geomorphological dispersion of the river network, which is kept unchanged independently of the grid size of the rainfall input. This makes HYDROSCAPE particularly suitable for multi-scale applications, ranging from medium-size catchments up to the continental scale, and for investigating the effects of extreme rainfall events that require an accurate description of basin response timing. A key feature of the model is its computational efficiency, which allows a large number of simulations to be performed for sensitivity/uncertainty analyses in a Monte Carlo framework. Further, the model is highly parsimonious, involving the calibration of only three parameters: one defining the residence time of the hillslope response, one for channel velocity, and a multiplicative factor accounting for uncertainties in the identification of the potential maximum soil moisture retention in the SCS-CN method. HYDROSCAPE is designed with a simple and flexible modular structure, which makes it particularly amenable to massive parallelization, customization according to specific user needs and preferences (e.g., the rainfall-runoff model), and continuous development and improvement. Finally, the possibility of specifying the desired computational time step and evaluating streamflow at any location in the domain makes HYDROSCAPE an attractive tool for many hydrological applications, and a valuable alternative to more complex and highly parametrized large-scale hydrological models. Together with model development and features, we present an application to the Upper Tiber River basin (Italy), providing a practical example of model performance and characteristics.
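    The SCS-CN step mentioned above has a simple closed form; the sketch below implements it, with the multiplicative correction on potential retention treated as the model's third calibration parameter. The numbers are illustrative only and do not come from the HYDROSCAPE application.

      def scs_cn_runoff(p_mm: float, cn: float, retention_factor: float = 1.0) -> float:
          """Direct runoff Q (mm) from event rainfall P (mm) via the SCS-CN relation."""
          s = retention_factor * (25400.0 / cn - 254.0)   # potential maximum retention S (mm)
          ia = 0.2 * s                                    # initial abstraction (common assumption)
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      # Example: a 60 mm storm on a catchment with curve number CN = 75.
      print(f"runoff = {scs_cn_runoff(60.0, 75.0):.1f} mm")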

  12. The Science of Cost-Effective Materials Design - A Study in the Development of a High Strength, Impact Resistant Steel

    NASA Astrophysics Data System (ADS)

    Abrahams, Rachel

    2017-06-01

    Intermediate alloy steels are widely used in applications where both high strength and toughness are required for extreme/dynamic loading environments. Steels containing greater than 10% Ni-Co-Mo are amongst the highest-strength martensitic steels, due to their high levels of solution strengthening and the preservation of toughness through nano-scaled, secondary-hardening, semi-coherent hcp-M2C carbides. While these steels combine high yield strengths (σy,0.2% > 1200 MPa) with high impact toughness values (CVN at -40 °C > 30 J), they are often cost-prohibitive due to the material and processing cost of nickel and cobalt. Early stage-I steels such as ES-1 (Eglin Steel) were developed in response to the high cost of nickel-cobalt steels and performed well in extreme shock environments due to the presence of analogous nano-scaled hcp-Fe2.4C epsilon carbides. Unfortunately, the persistence of W-bearing carbides limited the use of ES-1 to relatively thin sections. In this study, we discuss the background and accelerated development cycle of AF96, an alternative Cr-Mo-Ni-Si stage-I temper steel developed using low-cost heuristic and Integrated Computational Materials Engineering (ICME)-assisted methods. The microstructure of AF96 was tailored to mimic that of ES-1, while reducing the stability of detrimental phases and improving ease of processing in industrial environments. AF96 is amenable to casting and forging, deeply hardenable, and scalable to 100,000 kg melt quantities. When produced at the industrial scale, AF96 was found to exhibit nearly statistically identical mechanical properties to ES-1 at 50% of the cost.

  13. Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma

    NASA Astrophysics Data System (ADS)

    Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.

    2018-01-01

    High-fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1.0)) and electron-scale (k_θ ρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion-to-electron mass ratio ((m_i/m_e)^(1/2) = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as with the measured values of electron incremental thermal diffusivities, which are indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.

  14. Effects of precision demands and mental pressure on muscle activation and hand forces in computer mouse tasks.

    PubMed

    Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap

    2004-02-05

    The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.

  15. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices.

    PubMed

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transient characteristics within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with pulse parameters down to a 1 ns rise/fall time and a 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated switching of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps, strikingly one order of magnitude faster than earlier reports. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  16. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed specifically for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transient characteristics within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with pulse parameters down to a 1 ns rise/fall time and a 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated switching of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps, strikingly one order of magnitude faster than earlier reports. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  17. The Effectiveness of a Computer Game-Based Rehabilitation Platform for Children With Cerebral Palsy: Protocol for a Randomized Clinical Trial

    PubMed Central

    Parmar, Sanjay; Gandhi, Dorcas BC; Rempel, Gina Ruth; Restall, Gayle; Sharma, Monika; Narayan, Amitesh; Pandian, Jeyaraj; Naik, Nilashri; Savadatti, Ravi R; Kamate, Mahesh Appasaheb

    2017-01-01

    Background It is difficult to engage young children with cerebral palsy (CP) in repetitive, tedious therapy. As such, there is a need for innovative approaches and tools to motivate these children. We developed the low-cost, computer game-based rehabilitation platform CGR that combines fine manipulation and gross movement exercises with attention and planning game activities appropriate for young children with CP. Objective The objective of this study is to provide evidence of the therapeutic value of CGR to improve upper extremity (UE) motor function for children with CP. Methods This randomized controlled, single-blind, clinical trial with an active control arm will be conducted at 4 sites. Children diagnosed with CP between the ages of 4 and 10 years old with moderate UE impairments and fine motor control abnormalities will be recruited. Results We will test the difference between experimental and control groups using the Quality of Upper Extremity Skills Test (QUEST) and Peabody Developmental Motor Scales, Second Edition (PDMS-2) outcome measures. The experiences of the parents of the children and of the therapists with the interventions and tools will be explored through semi-structured interviews using the qualitative description approach. Conclusions This research protocol, if effective, will provide evidence for the therapeutic value and feasibility of CGR in the pediatric rehabilitation of UE function. Trial Registration Clinicaltrials.gov NCT02728375; https://clinicaltrials.gov/ct2/show/NCT02728375 (Archived by WebCite at http://www.webcitation.org/6qDjvszvh) PMID:28526673

  18. Assessing the impact of future climate extremes on the US corn and soybean production

    NASA Astrophysics Data System (ADS)

    Jin, Z.

    2015-12-01

    Future climate change will pose major challenges to the US agricultural system, among which increasing heat stress and precipitation variability are the two major concerns. Reliable prediction of crop production in response to increasingly frequent and severe climate extremes is a prerequisite for developing adaptive strategies for agricultural risk management. However, progress has been slow in quantifying the uncertainty of computational predictions at high spatial resolutions. Here we assessed the risks of future climate extremes to US corn and soybean production using the Agricultural Production Systems sIMulator (APSIM) model under different climate scenarios. To quantify the uncertainty due to the conceptual representation of heat, drought and flooding stress in crop models, we proposed a new algorithm-ensemble strategy in which different methods for simulating crop responses to those extreme climatic events were incorporated into APSIM. This strategy allowed us to set aside irrelevant structural differences among existing crop models and focus only on the processes of interest. Future climate inputs were derived from high-spatial-resolution (12 km × 12 km) Weather Research and Forecasting (WRF) simulations under Representative Concentration Pathways 4.5 (RCP 4.5) and 8.5 (RCP 8.5). Based on the crop model simulations, we analyzed the magnitude and frequency of heat, drought and flooding stress for the 21st century. We also evaluated water use efficiency and water deficit on regional scales if farmers were to boost their yields by applying more fertilizer. Finally, we proposed spatially explicit adaptation strategies for irrigation and fertilization for different management zones.

  19. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
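
    A minimal Python sketch of tau-extremal optimization on a toy MAX-CUT instance is given below; the graph, parameter values, and rank-selection details are illustrative assumptions, not the authors' implementation. Each vertex gets a local "fitness" (the fraction of its edges currently cut), and at every step a poorly-fit vertex, chosen by a power law over the fitness ranking governed by the single parameter tau, has its partition label flipped.

        import random

        def tau_eo_maxcut(edges, n, tau=1.4, steps=5000, seed=0):
            """tau-EO sketch for MAX-CUT: repeatedly flip a poorly-fit vertex chosen by rank."""
            rng = random.Random(seed)
            side = [rng.randint(0, 1) for _ in range(n)]       # random initial partition
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)

            def cut_size(s):
                return sum(1 for u, v in edges if s[u] != s[v])

            best, best_cut = list(side), cut_size(side)
            for _ in range(steps):
                # local fitness: fraction of incident edges currently cut (higher is better)
                fit = sorted((sum(side[u] != side[v] for v in adj[u]) / max(len(adj[u]), 1), u)
                             for u in range(n))                # worst fitness first
                # draw a rank k in [1, n] with probability roughly proportional to k**(-tau)
                r = rng.random()
                k = int((1 + r * (n ** (1 - tau) - 1)) ** (1 / (1 - tau)))
                u = fit[min(max(k, 1), n) - 1][1]
                side[u] ^= 1                                   # unconditionally update the selected element
                c = cut_size(side)
                if c > best_cut:                               # remember the best configuration visited
                    best_cut, best = c, list(side)
            return best_cut, best

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]   # small complete graph K4
        print(tau_eo_maxcut(edges, n=4))                            # a best cut of 4 is expected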

  20. Convective phenomena at high resolution over Europe and the Mediterranean. The joint EURO-CORDEX and Med-CORDEX flagship pilot study

    NASA Astrophysics Data System (ADS)

    Coppola, Erika; Sobolowski, Stefan

    2017-04-01

    The joint EURO-CORDEX and Med-CORDEX Flagship Pilot Study, dedicated to the frontier research of using convection-permitting models to address the impact of human-induced climate change on convection, has recently been approved; the scientific community behind the project comprises 30 scientific institutes distributed across Europe. The motivations for such a challenge are the availability of large field campaigns dedicated to the study of heavy precipitation events, such as HyMeX, and of high-resolution, dense observation networks such as WegenerNet, RdisaggH (CH), COMEPHORE (Fr), SAFRAN (Fr) and EURO4M-APGD (CH); increased computing capacity and recent model developments; the emerging trend signals in extreme precipitation at daily and especially sub-daily time scales in the Mediterranean and Alpine regions; and the priority given to convective extreme events under the WCRP Grand Challenge on climate extremes, because they carry both society-relevant and scientific challenges. The main objectives of this effort are to investigate convective-scale events, their processes and their changes in a few key regions of Europe and the Mediterranean using convection-permitting RCMs, statistical models and available observations; to provide a collective assessment of modeling capacity at convection-permitting scale; and to shape a coherent, collective assessment of the consequences of climate change on convective event impacts at local to regional scales. The scientific aims of this research are to investigate how convective events and the damaging phenomena associated with them will respond to changing climate conditions in several European regions with different climates, to understand whether an improved representation of convective phenomena at convection-permitting scales will lead to upscaled added value, and finally to assess the possibility of replacing these costly convection-permitting experiments with statistical approaches such as "convection emulators". The common initial domain will be an extended Alpine domain, and all groups will simulate a minimum 10-year period with ERA-Interim boundary conditions, with the possibility of two additional sub-domains, one over northwestern continental Europe and another over the southeastern Mediterranean. The scenario simulations will be completed for three 10-year time slices, one in the historical period, one in the near future and one in the far future, under the RCP8.5 scenario. The first target of this scientific community is to have an ensemble of 1-2 year ERA-Interim simulations ready by next summer.

  1. Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.

    2014-05-01

    The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are largely unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility of efficiently computing synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of ground motion scenarios can be defined at both national and local scales, the latter accounting for the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. At the scenario scale, quick parametric studies can easily be performed to understand the influence of the model characteristics on the computed ground shaking scenarios. For massive parametric tests, or for the repeated generation of large-scale hazard maps, the methodology can take advantage of more advanced computational platforms, ranging from GRID computing infrastructures to dedicated HPC clusters and Cloud computing. In such a way, scientists can deal efficiently with the variety and complexity of the potential earthquake sources, and perform parametric studies to characterize the related uncertainties. NDSHA provides realistic time series of expected ground motion readily applicable for seismic engineering analysis and other mitigation actions. The methodology has been successfully applied to strategic buildings, lifelines and cultural heritage sites, and for the purpose of seismic microzoning in several urban areas worldwide. A web application is currently being developed that facilitates access to the NDSHA methodology and the related outputs by end-users who are interested in reliable territorial planning and in the design and construction of buildings and infrastructures in seismic areas. At the same time, the web application is also shaping up as an advanced educational tool to explore interactively how seismic waves are generated at the source, propagate inside structural models, and build up ground shaking scenarios.
We illustrate the preliminary results obtained from a multiscale application of the NDSHA approach to the territory of India, zooming from large-scale hazard maps of ground shaking at bedrock to the definition of local-scale earthquake scenarios for selected sites in the Gujarat state (NW India). The study aims to provide the community (e.g. authorities and engineers) with advanced information for earthquake risk mitigation, which is particularly relevant to Gujarat in view of the rapid development and urbanization of the region.

  2. Sensitivity of Rainfall Extremes Under Warming Climate in Urban India

    NASA Astrophysics Data System (ADS)

    Ali, H.; Mishra, V.

    2017-12-01

    Extreme rainfall events in urban India have halted transportation, damaged infrastructure, and affected human lives. Rainfall extremes are projected to increase under the future climate. We evaluated the relationship (scaling) between rainfall extremes at different temporal resolutions (daily, 3-hourly, and 30 minutes), daily dewpoint temperature (DPT), and daily air temperature at 850 hPa (T850) for 23 urban areas in India. Daily rainfall extremes obtained from the Global Surface Summary of the Day (GSOD) data showed positive regression slopes for most of the cities, with a median of 14%/K for the period 1979-2013 for both DPT and T850, which is higher than the Clausius-Clapeyron (C-C) rate (~7%/K). Moreover, sub-daily rainfall extremes are more sensitive to both DPT and T850. For instance, 3-hourly rainfall extremes obtained from the Tropical Rainfall Measuring Mission (TRMM 3B42 V7) showed regression slopes of more than 16%/K against DPT and T850 for the period 1998-2015. Half-hourly rainfall extremes from the Integrated Multi-satellitE Retrievals (IMERG) of the Global Precipitation Measurement (GPM) mission also showed higher sensitivity to changes in DPT and T850. This super scaling (above the C-C rate) of rainfall extremes against changes in DPT and T850 can be attributed to the convective nature of precipitation in India. Our results show that urban India may witness non-stationary rainfall extremes, which, in turn, will affect stormwater designs and the frequency and magnitude of urban flooding.
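
    As a minimal sketch of how such a scaling rate in %/K can be estimated and compared against the ~7%/K C-C reference (synthetic data and illustrative bin counts, not the study's datasets or exact method): bin rainfall by dew-point temperature, take a high percentile in each bin, and regress its logarithm on temperature.

        import numpy as np

        rng = np.random.default_rng(42)
        dpt = rng.uniform(10, 28, 5000)                        # dew-point temperature, deg C
        # synthetic rainfall whose extremes grow at roughly 10%/K
        rain = rng.gamma(shape=0.8, scale=np.exp(0.10 * dpt))

        def scaling_rate(temp, precip, q=0.99, nbins=12):
            bins = np.quantile(temp, np.linspace(0, 1, nbins + 1))
            t_mid, p_q = [], []
            for lo, hi in zip(bins[:-1], bins[1:]):
                sel = (temp >= lo) & (temp < hi)
                if sel.sum() > 20:                             # require enough samples per bin
                    t_mid.append(temp[sel].mean())
                    p_q.append(np.quantile(precip[sel], q))
            slope = np.polyfit(t_mid, np.log(np.array(p_q)), 1)[0]
            return 100.0 * (np.exp(slope) - 1.0)               # percent per kelvin

        print(f"estimated scaling: {scaling_rate(dpt, rain):.1f} %/K (C-C reference ~7 %/K)")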

  3. Evaluation of seabed mapping methods for fine-scale classification of extremely shallow benthic habitats - Application to the Venice Lagoon, Italy

    NASA Astrophysics Data System (ADS)

    Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.

    2016-03-01

    Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classifications, both pixel-based and object-based approaches, including manual, Maximum Likelihood Classifier, Jenks Optimization clustering, textural analysis and Object Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of the substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User, Producer Accuracies and the Kappa statistic) by cross tabulating predicted and reference instances. Overall, pixel based segmentations produced the highest accuracies and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution, particularly within the context of the new tidal barrier being built. However, they had remained largely unexplored until now, because of the surveying challenges. The application of this remote sensing approach, combined with targeted sampling, opens a new perspective in the monitoring of benthic habitats in view of a knowledge-based management of natural resources in shallow coastal areas.
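
    The accuracy measures named above (overall, user's and producer's accuracy, and the Kappa statistic) can be computed from a cross-tabulation of predicted versus reference classes; the following is a generic sketch with a made-up three-class confusion matrix, not the study's data.

        import numpy as np

        def classification_accuracies(conf):
            """conf[i, j] = number of samples with reference class i predicted as class j."""
            conf = np.asarray(conf, dtype=float)
            total = conf.sum()
            overall = np.trace(conf) / total
            producer = np.diag(conf) / conf.sum(axis=1)        # per reference class (omission errors)
            user = np.diag(conf) / conf.sum(axis=0)            # per predicted class (commission errors)
            expected = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / total**2
            kappa = (overall - expected) / (1.0 - expected)
            return overall, producer, user, kappa

        # toy 3-class example: e.g. sponges, vegetation, bare sediment
        conf = [[40, 5, 2],
                [6, 30, 4],
                [1, 3, 50]]
        oa, pa, ua, k = classification_accuracies(conf)
        print(f"overall={oa:.2f}, kappa={k:.2f}")
        print("producer's:", np.round(pa, 2), "user's:", np.round(ua, 2))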

  4. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android

    DTIC Science & Technology

    2015-10-01

    ARL-TN-0681, October 2015. US Army Research Laboratory. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android, by Ken F Yu and Garret S Payer, Computational and Information Sciences Directorate, ARL.

  5. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore, the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  6. Comparative evaluation of the IPCC AR5 CMIP5 versus the AR4 CMIP3 model ensembles for regional precipitation and their extremes over South America

    NASA Astrophysics Data System (ADS)

    Tolen, J.; Kodra, E. A.; Ganguly, A. R.

    2011-12-01

    The assertion that higher-resolution experiments or more sophisticated process models within the IPCC AR5 CMIP5 suite of global climate model ensembles improve precipitation projections over the IPCC AR4 CMIP3 suite remains a hypothesis that needs to be rigorously tested. The question is particularly important for local to regional assessments at scales relevant for the management of critical infrastructures and key resources, particularly for the attributes of severe precipitation events, for example the intensity, frequency and duration of extreme precipitation. Our case study is South America, where precipitation and its extremes play a central role in sustaining natural, built and human systems. To test the hypothesis that CMIP5 improves over CMIP3 in this regard, spatial and temporal measures of prediction skill are constructed and computed by comparing climate model hindcasts with the NCEP-II reanalysis data, considered here as surrogate observations, for the entire globe and for South America. In addition, gridded precipitation observations over South America based on rain gage measurements are considered. The results suggest that the utility of the next generation of global climate models over the current generation needs to be carefully evaluated on a case-by-case basis before communicating to resource managers and policy makers.

  7. Adaptive mixed reality rehabilitation improves quality of reaching movements more than traditional reaching therapy following stroke.

    PubMed

    Duff, Margaret; Chen, Yinpeng; Cheng, Long; Liu, Sheng-Min; Blake, Paul; Wolf, Steven L; Rikakis, Thanassis

    2013-05-01

    Adaptive mixed reality rehabilitation (AMRR) is a novel integration of motion capture technology and high-level media computing that provides precise kinematic measurements and engaging multimodal feedback for self-assessment during a therapeutic task. We describe the first proof-of-concept study to compare outcomes of AMRR and traditional upper-extremity physical therapy. Two groups of participants with chronic stroke received either a month of AMRR therapy (n = 11) or matched dosing of traditional repetitive task therapy (n = 10). Participants were right handed, between 35 and 85 years old, and could independently reach to and at least partially grasp an object in front of them. Upper-extremity clinical scale scores and kinematic performances were measured before and after treatment. Both groups showed increased function after therapy, demonstrated by statistically significant improvements in Wolf Motor Function Test and upper-extremity Fugl-Meyer Assessment (FMA) scores, with the traditional therapy group improving significantly more on the FMA. However, only participants who received AMRR therapy showed a consistent improvement in kinematic measurements, both for the trained task of reaching to grasp a cone and the untrained task of reaching to push a lighted button. AMRR may be useful in improving both functionality and the kinematics of reaching. Further study is needed to determine if AMRR therapy induces long-term changes in movement quality that foster better functional recovery.

  8. Collective input/output under memory constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yin; Chen, Yong; Zhuang, Yu

    2014-12-18

    Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly reduce achievable collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjoint subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating the memory pressure, alleviating the contention for memory bandwidth, and improving the I/O performance of projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
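
    A conceptual sketch of the aggregator-selection idea is shown below; the data structures and node layout are hypothetical, and the actual strategy described above additionally coordinates intranode and internode traffic and accounts for bandwidth constraints.

        from collections import defaultdict

        def choose_aggregators(procs):
            """procs: dicts with 'rank', 'node', and a run-time 'free_mem_mb' estimate."""
            groups = defaultdict(list)
            for p in procs:
                groups[p["node"]].append(p)                    # disjoint subgroup per node
            aggregators = {}
            for node, members in groups.items():
                # pick the process with the most free memory to buffer aggregated data
                agg = max(members, key=lambda p: p["free_mem_mb"])
                aggregators[node] = agg["rank"]
            return aggregators

        procs = [
            {"rank": 0, "node": "n0", "free_mem_mb": 512},
            {"rank": 1, "node": "n0", "free_mem_mb": 2048},
            {"rank": 2, "node": "n1", "free_mem_mb": 1024},
            {"rank": 3, "node": "n1", "free_mem_mb": 256},
        ]
        print(choose_aggregators(procs))   # e.g. {'n0': 1, 'n1': 2}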

  9. Quantitative evaluation of phonetograms in the case of functional dysphonia.

    PubMed

    Airainer, R; Klingholz, F

    1993-06-01

    According to the laryngeal clinical findings, values on a rating scale were assigned to vocally trained and vocally untrained persons suffering from different types of functional dysphonia. The different types of dysphonia--from manifest hypofunctional to extreme hyperfunctional dysphonia--were classified by means of this scale. In addition, the subjects' phonetograms were measured and approximated by three ellipses, which made it possible to define phonetogram parameters. Selected phonetogram parameters were combined into linear combinations for the purpose of phonetographic evaluation. The linear combinations were intended to bring the phonetographic and clinical evaluations into correspondence as accurately as possible. It was necessary to use different linear combinations for male and female singers and nonsingers. Based on the reclassification of 71 patients and the new classification of 89 patients, it was possible to grade the types of functional dysphonia by means of computer-aided phonetogram evaluation with a clinically acceptable error rate. This method proved to be an important supplement to the conventional diagnostics of functional dysphonia.

  10. Direct EUV/X-Ray Modulation of the Ionosphere During the August 2017 Total Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Mrak, Sebastijan; Semeter, Joshua; Drob, Douglas; Huba, J. D.

    2018-05-01

    The great American total solar eclipse of 21 August 2017 offered a fortuitous opportunity to study the response of the atmosphere and ionosphere using a myriad of ground instruments. We have used the network of U.S. Global Positioning System receivers to examine perturbations in maps of ionospheric total electron content (TEC). Coherent large-scale variations in TEC have been interpreted by others as gravity wave-induced traveling ionospheric disturbances. However, the solar disk had two active regions at that time, one near the center of the disk and one at the edge, which resulted in an irregular illumination pattern in the extreme ultraviolet (EUV)/X-ray bands. Using detailed EUV occultation maps calculated from the National Aeronautics and Space Administration Solar Dynamics Observatory Atmospheric Imaging Assembly images, we show excellent agreement between TEC perturbations and computed gradients in EUV illumination. The results strongly suggest that prominent large-scale TEC disturbances were consequences of direct EUV modulation, rather than gravity wave-induced traveling ionospheric disturbances.

  11. Surface code architecture for donors and dots in silicon with imprecise and nonuniform qubit couplings

    DOE PAGES

    Pica, G.; Lovett, B. W.; Bhatt, R. N.; ...

    2016-01-14

    A scaled quantum computer with donor spins in silicon would benefit from a viable semiconductor framework and a strong inherent decoupling of the qubits from the noisy environment. Coupling neighboring spins via the natural exchange interaction according to current designs requires gate control structures with extremely small length scales. In this work, we present a silicon architecture where bismuth donors with long coherence times are coupled to electrons that can shuttle between adjacent quantum dots, thus relaxing the pitch requirements and allowing space between donors for classical control devices. An adiabatic SWAP operation within each donor/dot pair solves the scalability issues intrinsic to exchange-based two-qubit gates, as it does not rely on subnanometer precision in donor placement and is robust against noise in the control fields. In conclusion, we use this SWAP together with well-established global microwave Rabi pulses and parallel electron shuttling to construct a surface code that needs minimal, feasible local control.

  12. Plasmonic hot carrier dynamics in solid-state and chemical systems for energy conversion

    DOE PAGES

    Narang, Prineha; Sundararaman, Ravishankar; Atwater, Harry A.

    2016-06-11

    Surface plasmons provide a pathway to efficiently absorb and confine light in metallic nanostructures, thereby bridging photonics to the nanoscale. The decay of surface plasmons generates energetic 'hot' carriers, which can drive chemical reactions or be injected into semiconductors for nanoscale photochemical or photovoltaic energy conversion. Novel plasmonic hot carrier devices and architectures continue to be demonstrated, but the complexity of the underlying processes makes a complete microscopic understanding of all the mechanisms and design considerations for such devices extremely challenging. Here, we review the theoretical and computational efforts to understand and model plasmonic hot carrier devices. We split the problem into three steps: hot carrier generation, transport and collection, and review theoretical approaches with the appropriate level of detail for each step, along with their predictions. As a result, we identify the key advances necessary to complete the microscopic mechanistic picture and facilitate the design of the next generation of devices and materials for plasmonic energy conversion.

  13. Comparative modeling of Bronze Age land use in the Malatya Plain (Turkey)

    NASA Astrophysics Data System (ADS)

    Arıkan, Bülent; Restelli, Francesca Balossi; Masi, Alessia

    2016-03-01

    Computational modeling in archeology has proven to be a useful tool in quantifying changes in the paleoenvironment. This especially useful method combines data from diverse disciplines to answer questions focusing on the complex and non-linear aspects of human-environment interactions. The research presented here uses various proxy records to compare the changes in climate during the Bronze Age in the Malatya Plain in eastern Anatolia, which is situated at the northern extremity of northern Mesopotamia. Extensive agropastoral land use modeling was applied to three sites of different size and function in the Malatya Plain during the Early Bronze Age I period to simulate the varying scale and intensity of human impacts in relation to changes in the level of social organization, demography, and temporal length. The results suggest that even in land use types subjected to a light footprint, the scale and intensity of anthropogenic impacts change significantly in relation to the level of social organization.

  14. A Harder Rain is Going to Fall: Challenges for Actionable Projections of Extremes

    NASA Astrophysics Data System (ADS)

    Collins, W.

    2014-12-01

    Hydrometeorological extremes are projected to increase in both severity and frequency as the Earth's surface continues to warm in response to anthropogenic emissions of greenhouse gases. These extremes will directly affect the availability and reliability of water and other critical resources. The most comprehensive suite of multi-model projections has been assembled under the Coupled Model Intercomparison Project version 5 (CMIP5) and assessed in the Fifth Assessment (AR5) of the Intergovernmental Panel on Climate Change (IPCC). In order for these projections to be actionable, the projections should exhibit consistency and fidelity down to the local length and timescales required for operational resource planning, for example the scales relevant for water allocations from a major watershed. In this presentation, we summarize the length and timescales relevant for resource planning and then use downscaled versions of the IPCC simulations over the contiguous United States to address three questions. First, over what range of scales is there quantitative agreement between the simulated historical extremes and in situ measurements? Second, does this range of scales in the historical and future simulations overlap with the scales relevant for resource management and adaptation? Third, does downscaling enhance the degree of multi-model consistency at scales smaller than the typical global model resolution? We conclude by using these results to highlight requirements for further model development to make the next generation of models more useful for planning purposes.

  15. Scaling of flow and transport behavior in heterogeneous groundwater systems

    NASA Astrophysics Data System (ADS)

    Scheibe, Timothy; Yabusaki, Steven

    1998-11-01

    Three-dimensional numerical simulations using a detailed synthetic hydraulic conductivity field developed from geological considerations provide insight into the scaling of subsurface flow and transport processes. Flow and advective transport in the highly resolved heterogeneous field were modeled using massively parallel computers, providing a realistic baseline for evaluation of the impacts of parameter scaling. Upscaling of hydraulic conductivity was performed at a variety of scales using a flexible power-law averaging technique. A series of tests was performed to determine the effects of varying the scaling exponent on a number of metrics of flow and transport behavior. Flow and transport simulation on high-performance computers and three-dimensional scientific visualization combine to form a powerful tool for gaining insight into the behavior of complex heterogeneous systems. Many quantitative groundwater models utilize upscaled hydraulic conductivity parameters, either implicitly or explicitly. These parameters are designed to reproduce the bulk flow characteristics at the grid or field scale while not requiring detailed quantification of local-scale conductivity variations. An example from applied groundwater modeling is the common practice of calibrating grid-scale model hydraulic conductivity or transmissivity parameters so as to approximate observed hydraulic head and boundary flux values. Such parameterizations, perhaps with a bulk dispersivity imposed, are then sometimes used to predict transport of reactive or non-reactive solutes. However, this work demonstrates that the parameters that lead to the best upscaling for hydraulic conductivity and head do not necessarily correspond to the best upscaling for prediction of a variety of transport behaviors. This result reflects the fact that transport is strongly impacted by the existence and connectedness of extreme-valued hydraulic conductivities, in contrast to bulk flow, which depends more strongly on mean values. It provides motivation for continued research into upscaling methods for transport that directly address advection in heterogeneous porous media. An electronic version of this article, containing additional supporting information, graphics, and a 3D animation of simulated particle movement, is available at the journal's homepage (http://www.elsevier.com/locate/advwatres; see the special section on visualization).
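
    The flexible power-law averaging mentioned above can be written as K_up = (mean(K^p))^(1/p), where p = 1 gives the arithmetic mean, p -> 0 the geometric mean, and p = -1 the harmonic mean. A short sketch with made-up local conductivity values (not the study's field data):

        import numpy as np

        def power_law_average(k_values, p):
            """Power-law (generalized) mean of local hydraulic conductivities."""
            k = np.asarray(k_values, dtype=float)
            if abs(p) < 1e-12:                       # limiting case: geometric mean
                return float(np.exp(np.mean(np.log(k))))
            return float(np.mean(k ** p) ** (1.0 / p))

        block = np.array([1e-5, 3e-4, 8e-6, 2e-3])   # local-scale conductivities (m/s)
        for p in (-1.0, 0.0, 0.33, 1.0):
            print(p, power_law_average(block, p))

    Varying the exponent p, as the study does, shifts the upscaled value between harmonic, geometric, and arithmetic limits and hence changes the simulated bulk flow and transport behavior.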

  16. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  17. Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith

    2016-02-14

    Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rates of data sets have exploded, managing, analyzing and presenting the data collected at synchrotrons has become an increasing challenge. The ALS has partnered with high-performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to overcome this challenge. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam-time. In this paper, we present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolme, David S; Mikkilineni, Aravind K; Rose, Derek C

    Analog computational circuits have been demonstrated to provide substantial improvements in power and speed relative to digital circuits, especially for applications requiring extreme parallelism but only modest precision. Deep machine learning is one such area and stands to benefit greatly from analog and mixed-signal implementations. However, even at modest precisions, offsets and non-linearity can degrade system performance. Furthermore, in all but the simplest systems, it is impossible to directly measure the intermediate outputs of all sub-circuits. The result is that circuit designers are unable to accurately evaluate the non-idealities of computational circuits in-situ and are therefore unable to fully utilize measurement results to improve future designs. In this paper we present a technique to use deep learning frameworks to model physical systems. Recently developed libraries like TensorFlow make it possible to use back propagation to learn parameters in the context of modeling circuit behavior. Offsets and scaling errors can be discovered even for sub-circuits that are deeply embedded in a computational system and not directly observable. The learned parameters can be used to refine simulation methods or to identify appropriate compensation strategies. We demonstrate the framework using a mixed-signal convolution operator as an example circuit.
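
    A framework-agnostic toy version of this idea is sketched below; the paper uses TensorFlow's automatic differentiation on a mixed-signal convolution operator, whereas this self-contained NumPy example fits just a made-up gain error and offset by plain gradient descent on end-to-end measurements.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(-1, 1, 200)                    # applied inputs
        true_gain, true_offset = 0.93, 0.05            # hidden non-idealities to recover
        y = true_gain * x + true_offset + 0.01 * rng.normal(size=x.size)   # measured outputs

        gain, offset, lr = 1.0, 0.0, 0.1               # start from the ideal circuit model
        for _ in range(500):
            pred = gain * x + offset
            err = pred - y
            # gradients of the mean squared error with respect to gain and offset
            gain -= lr * 2.0 * np.mean(err * x)
            offset -= lr * 2.0 * np.mean(err)

        print(f"recovered gain={gain:.3f}, offset={offset:.3f}")

    The learned values can then be fed back into circuit simulations or used to choose compensation strategies, which is the workflow the abstract describes.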

  19. Optimization of a Small Scale Linear Reluctance Accelerator

    NASA Astrophysics Data System (ADS)

    Barrera, Thor; Beard, Robby

    2011-11-01

    Reluctance accelerators are an extremely promising future method of transportation. Several problems still plague these devices, most prominently low efficiency. The variables involved in overcoming the efficiency problems are numerous, and it is difficult to correlate how each affects the accelerator. This study examined several of these variables, which present potential challenges in optimizing the efficiency of reluctance accelerators: coil and projectile design, power supplies, switching, and the elusive gradient-inductance problem. Extensive research in these areas has been performed, ranging from computational and theoretical work to experiments. The findings show that these parameters share significant similarity with transformer design elements; the currently optimized parameters are therefore suggested as a baseline for further research and design. A demonstration of these findings will be offered at the time of presentation.

  20. High temperature x-ray micro-tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDowell, Alastair A., E-mail: aamacdowell@lbl.gov; Barnard, Harold; Parkinson, Dilworth Y.

    2016-07-27

    There is increasing demand for 3D micro-scale time-resolved imaging of samples in realistic - and in many cases extreme - environments. The data is used to understand material response, validate and refine computational models which, in turn, can be used to reduce development time for new materials and processes. Here we present the results of high temperature experiments carried out at the x-ray micro-tomography beamline 8.3.2 at the Advanced Light Source. The themes involve material failure and processing at temperatures up to 1750°C. The experimental configurations required to achieve the requisite conditions for imaging are described, with examples of ceramic matrix composites, spacecraft ablative heat shields and nuclear reactor core Gilsocarbon graphite.

  1. Sensing of single electrons using micro and nano technologies: a review

    NASA Astrophysics Data System (ADS)

    Jalil, Jubayer; Zhu, Yong; Ekanayake, Chandima; Ruan, Yong

    2017-04-01

    During the last three decades, the remarkable dynamic features of microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), and advances in solid-state electronics hold much potential for the fabrication of extremely sensitive charge sensors. These sensors have a broad range of applications, such as those involving the measurement of ionization radiation, detection of bio-analyte and aerosol particles, mass spectrometry, scanning tunneling microscopy, and quantum computation. Designing charge sensors (also known as charge electrometers) for electrometry is deemed significant because of the sensitivity and resolution issues in the range of micro- and nano-scales. This article reviews the development of state-of-the-art micro- and nano-charge sensors, and discusses their technological challenges for practical implementation.

  2. Analysis of Environmental Stress Factors Using an Artificial Growth System and Plant Fitness Optimization

    PubMed Central

    Lee, Meonghun; Yoe, Hyun

    2015-01-01

    The environment promotes evolution. Evolutionary processes represent environmental adaptations over long time scales; evolution of crop genomes cannot be induced within the relatively short time span of a human generation. Extreme environmental conditions can accelerate evolution, but such conditions are often stress-inducing and disruptive. Artificial growth systems can be used to induce and select genomic variation by changing external environmental conditions, thus accelerating evolution. Using cloud computing and big-data analysis, we analyzed environmental stress factors for Pleurotus ostreatus by assessing, evaluating, and predicting information about the growth environment. Through the indexing of environmental stress, the growth environment can be precisely controlled, and this approach can be developed into a technology for improving crop quality and production. PMID:25874206

  3. Collective bubble oscillations as a component of surf infrasound.

    PubMed

    Park, Joseph; Garcés, Milton; Fee, David; Pawlak, Geno

    2008-05-01

    Plunging surf is a known generator of infrasound, though the mechanisms have not been clearly identified. A model based on collective bubble oscillations created by demise of the initially entrained air pocket is examined. Computed spectra are compared to infrasound data from the island of Kauai during periods of medium, large, and extreme surf. Model results suggest that bubble oscillations generated by plunging waves are plausible generators of infrasound, and that dynamic bubble plume evolution on a temporal scale comparable to the breaking wave period may contribute to the broad spectral lobe of dominant infrasonic energy observed in measured data. Application of an inverse model has potential to characterize breaking wave size distributions, energy, and temporal changes in seafloor morphology based on remotely sensed infrasound.
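
    The abstract does not reproduce its model equations; a standard reference point in bubble acoustics (a textbook relation, not taken from the paper) is the Minnaert resonance of a single bubble of radius R_0,

        f_0 = \frac{1}{2\pi R_0}\sqrt{\frac{3\gamma p_0}{\rho}},

    where gamma is the ratio of specific heats of the gas, p_0 the ambient pressure, and rho the water density. Collective oscillations of a bubble plume occur at substantially lower frequencies than this single-bubble value, which is why bubble clouds created by plunging waves can radiate in the infrasonic band even though individual bubbles resonate at audible frequencies.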

  4. Off the scale: a new species of fish-scale gecko (Squamata: Gekkonidae: Geckolepis) with exceptionally large scales

    PubMed Central

    Daza, Juan D.; Köhler, Jörn; Vences, Miguel; Glaw, Frank

    2017-01-01

    The gecko genus Geckolepis, endemic to Madagascar and the Comoro archipelago, is taxonomically challenging. One reason is its members' ability to autotomize a large portion of their scales when grasped or touched, most likely to escape predation. Based on an integrative taxonomic approach including external morphology, morphometrics, genetics, pholidosis, and osteology, we here describe the first new species from this genus in 75 years: Geckolepis megalepis sp. nov. from the limestone karst of Ankarana in northern Madagascar. The new species has the largest known body scales of any gecko (both relatively and absolutely), which come off with exceptional ease. We provide a detailed description of the skeleton of the genus Geckolepis based on micro-Computed Tomography (micro-CT) analysis of the new species, the holotype of G. maculata, the recently resurrected G. humbloti, and a specimen belonging to an operational taxonomic unit (OTU) recently suggested to represent G. maculata. Geckolepis is characterized by highly mineralized, imbricated scales, paired frontals, and unfused subolfactory processes of the frontals, among other features. We identify diagnostic characters in the osteology of these geckos that help define our new species and show that the OTU assigned to G. maculata is probably not conspecific with it, leaving the taxonomic identity of this species unclear. We discuss possible reasons for the extremely enlarged scales of G. megalepis in the context of an anti-predator defence mechanism, and the future of Geckolepis taxonomy. PMID:28194313

  5. Measurement of electromagnetic waves in ELF and VLF bands to monitor lightning activity in the Maritime Continent

    NASA Astrophysics Data System (ADS)

    Yamashita, Kozo; Takahashi, Yukihiro; Ohya, Hiroyo; Tsuchiya, Fuminori; Sato, Mitsuteru; Matsumoto, Jun

    2013-04-01

    Lightning discharge data have attracted attention as an effective means of monitoring and nowcasting the thunderstorm activity that causes extreme weather. The spatial distribution of lightning discharges has been used as a proxy for the presence or absence of deep convection. Recent observations show that some lightning discharges are hundreds of times larger in scale than the average event. This indicates that lightning observations should estimate not only the occurrence but also the scale of discharges for a quantitative evaluation of atmospheric convection. In this study, a lightning observation network in the Maritime Continent is introduced. The network consists of sensors that measure the electromagnetic waves radiated by lightning discharges. The observation frequency is 0.1-40 kHz for the measurement of the magnetic field and 1-40 kHz for the electric field, with a sampling frequency of 100 kHz. Waveforms of the electromagnetic waves are recorded on a personal computer. We have already constructed observation stations at Tainan in Taiwan (23.1N, 121.1E), Saraburi in Thailand (14.5N, 101.0E), and Pontianak in Indonesia (0.0N, 109.4E). Furthermore, we plan to install the monitoring system at Los Banos in the Philippines (14.18N, 121.25E) and Hanoi in Viet Nam. Data obtained by the multipoint observations are synchronized using GPS receivers installed at each station. Using data from this network, the location and scale of lightning discharges can be estimated. The location of lightning is determined by the time-of-arrival method, with an expected geolocation accuracy of better than 10 km. Furthermore, the charge moment is evaluated as a measure of the scale of each lightning discharge; it is calculated from the electromagnetic waveform in the ELF range (3-30 kHz). At the presentation, we will show initial results on the geolocation of electromagnetic wave sources and the derivation of charge moment values based on measurements of ELF and VLF sferics.
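
    A heavily simplified illustration of time-of-arrival geolocation is sketched below (flat-Earth kilometre grid, synthetic arrival times, and a hypothetical station layout; the operational network works with geographic station coordinates and measured sferic waveforms).

        import numpy as np

        C = 299_792.458                                  # propagation speed, km/s

        def locate(stations_km, arrival_times_s, grid_km=2000.0, step_km=10.0):
            """Grid search for the source whose arrival-time differences best match the data."""
            stations = np.asarray(stations_km, float)
            t = np.asarray(arrival_times_s, float)
            xs = np.arange(-grid_km, grid_km + step_km, step_km)
            best, best_cost = None, np.inf
            for x in xs:
                for y in xs:
                    d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
                    pred = d / C
                    # use differences relative to the first station to cancel the unknown origin time
                    resid = (pred - pred[0]) - (t - t[0])
                    cost = np.sum(resid ** 2)
                    if cost < best_cost:
                        best, best_cost = (x, y), cost
            return best

        stations = [(0, 0), (900, 100), (200, 1100), (-800, 600)]    # hypothetical layout, km
        src = np.array([350.0, 420.0])
        times = [np.hypot(*(src - s)) / C + 1.0 for s in stations]   # synthetic arrivals (+1 s origin time)
        print(locate(stations, times))                               # recovers roughly (350, 420)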

  6. Impact of climate change on European weather extremes

    NASA Astrophysics Data System (ADS)

    Duchez, Aurelie; Forryan, Alex; Hirschi, Joel; Sinha, Bablu; New, Adrian; Freychet, Nicolas; Scaife, Adam; Graham, Tim

    2015-04-01

    An emerging science consensus is that global climate change will result in more extreme weather events with concomitant increasing financial losses. Key questions that arise are: Can an upward trend in natural extreme events be recognised and predicted at the European scale? What are the key drivers within the climate system that are changing and making extreme weather events more frequent, more intense, or both? Using state-of-the-art coupled climate simulations from the UK Met Office (HadGEM3-GC2, historical and future scenario runs) as well as reanalysis data, we highlight the potential of the currently most advanced forecasting systems to progress understanding of the causative drivers of European weather extremes, and assess future frequency and intensity of extreme weather under various climate change scenarios. We characterize European extremes in these simulations using a subset of the 27 core indices for temperature and precipitation from The Expert Team on Climate Change Detection and Indices (Tank et al., 2009). We focus on temperature and precipitation extremes (e.g. extremes in daily and monthly precipitation and temperatures) and relate them to the atmospheric modes of variability over Europe in order to establish the large-scale atmospheric circulation patterns that are conducive to the occurrence of extreme precipitation and temperature events. Klein Tank, Albert M.G., and Francis W. Zwiers. Guidelines on Analysis of Extremes in a Changing Climate in Support of Informed Decisions for Adaptation. WMO-TD No. 1500. Climate Data and Monitoring. World Meteorological Organization, 2009.
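
    For illustration, two heavily simplified indices in the spirit of that core set -- the maximum 1-day precipitation (Rx1day) and a count of days with Tmax above the base-period 90th percentile (a simplified TX90p, without the calendar-day window and bootstrapping used in the official definitions) -- computed here from synthetic daily series rather than the HadGEM3-GC2 output:

        import numpy as np

        def rx1day(daily_precip):
            """Maximum 1-day precipitation total in the series (mm)."""
            return float(np.max(daily_precip))

        def tx90p_days(daily_tmax, base_tmax):
            """Days with Tmax above the 90th percentile of a base-period series."""
            threshold = np.quantile(base_tmax, 0.90)
            return int(np.sum(daily_tmax > threshold))

        rng = np.random.default_rng(0)
        base = rng.normal(15, 8, 365 * 30)              # synthetic base-period Tmax (deg C)
        year_tmax = rng.normal(16, 8, 365)              # a slightly warmer test year
        year_pr = rng.gamma(0.7, 6.0, 365)              # synthetic daily precipitation (mm)
        print("Rx1day:", round(rx1day(year_pr), 1), "mm; TX90p days:", tx90p_days(year_tmax, base))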

  7. Optimized photonic gauge of extreme high vacuum with Petawatt lasers

    NASA Astrophysics Data System (ADS)

    Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor

    2014-03-01

    One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.

  8. Climate Change and Hydrological Extreme Events - Risks and Perspectives for Water Management in Bavaria and Québec

    NASA Astrophysics Data System (ADS)

    Ludwig, R.

    2017-12-01

    There is as yet no confirmed knowledge whether and how climate change contributes to the magnitude and frequency of hydrological extreme events and how regional water management could adapt to the corresponding risks. The ClimEx project (2015-2019) investigates the effects of climate change on the meteorological and hydrological extreme events and their implications for water management in Bavaria and Québec. High Performance Computing is employed to enable the complex simulations in a hydro-climatological model processing chain, resulting in a unique high-resolution and transient (1950-2100) dataset of climatological and meteorological forcing and hydrological response: (1) The climate module has developed a large ensemble of high resolution data (12km) of the CRCM5 RCM for Central Europe and North-Eastern North America, downscaled from 50 members of the CanESM2 GCM. The dataset is complemented by all available data from the Euro-CORDEX project to account for the assessment of both natural climate variability and climate change. The large ensemble with several thousand model years provides the potential to catch rare extreme events and thus improves the process understanding of extreme events with return periods of 1000+ years. (2) The hydrology module comprises process-based and spatially explicit model setups (e.g. WaSiM) for all major catchments in Bavaria and Southern Québec in high temporal (3h) and spatial (500m) resolution. The simulations form the basis for in depth analysis of hydrological extreme events based on the inputs from the large climate model dataset. The specific data situation enables to establish a new method for `virtual perfect prediction', which assesses climate change impacts on flood risk and water resources management by identifying patterns in the data which reveal preferential triggers of hydrological extreme events. The presentation will highlight first results from the analysis of the large scale ClimEx model ensemble, showing the current and future ratio of natural variability and climate change impacts on meteorological extreme events. Selected data from the ensemble is used to drive a hydrological model experiment to illustrate the capacity to better determine the recurrence periods of hydrological extreme events under conditions of climate change.

  9. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.

  10. High-reliability computing for the smarter planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently, IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows; already, critical infrastructure is failing too frequently. In this paper, we introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

  11. Evaluating sub-seasonal skill in probabilistic forecasts of Atmospheric Rivers and associated extreme events

    NASA Astrophysics Data System (ADS)

    Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.

    2017-12-01

    Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.
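
    One common way to quantify such probabilistic skill is the Brier score of ensemble-derived event probabilities against binary observations, compared with a climatological reference to form a skill score; the sketch below uses made-up numbers and is not tied to the NMME or S2S datasets.

        import numpy as np

        def brier_score(prob, obs):
            """prob: forecast probabilities in [0, 1]; obs: 1 if the event occurred, else 0."""
            prob, obs = np.asarray(prob, float), np.asarray(obs, float)
            return float(np.mean((prob - obs) ** 2))

        def brier_skill_score(prob, obs):
            # reference forecast: constant climatological event frequency
            climatology = np.full_like(np.asarray(obs, float), np.mean(obs))
            return 1.0 - brier_score(prob, obs) / brier_score(climatology, obs)

        # event: e.g. weekly AR activity above a fixed threshold; probabilities from ensemble fractions
        prob = [0.8, 0.1, 0.6, 0.3, 0.9, 0.2]
        obs = [1, 0, 1, 0, 1, 1]
        print("BS =", round(brier_score(prob, obs), 3), " BSS =", round(brier_skill_score(prob, obs), 3))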

  12. College students and computers: assessment of usage patterns and musculoskeletal discomfort.

    PubMed

    Noack-Cooper, Karen L; Sommerich, Carolyn M; Mirka, Gary A

    2009-01-01

    A limited number of studies have focused on computer-use-related MSDs in college students, though their risk factor exposure may be similar to that of workers who use computers. This study examined the computer use patterns of college students and made comparisons to a group of previously studied computer-using professionals. 234 students completed a web-based questionnaire concerning computer use habits and physical discomfort that respondents specifically associated with computer use. As a group, students reported their computer use to be at least 'Somewhat likely' for 18 out of 24 h/day, compared to 12 h for the professionals. Students reported more uninterrupted work behaviours than the professionals. Younger graduate students reported 33.7 average weekly computing hours, similar to the hours reported by younger professionals. Students generally reported more frequent upper extremity discomfort than the professionals. Frequent assumption of awkward postures was associated with frequent discomfort. The findings signal a need for intervention, including training and education, prior to entry into the workforce. Students are future workers, so it is important to determine whether their increasing exposure to computers before entering the workforce may cause them to enter it already injured, or not to enter their chosen profession at all due to upper extremity MSDs.

  13. Projected changes over western Canada using convection-permitting regional climate model and the pseudo-global warming method

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kurkute, S.; Chen, L.

    2017-12-01

    Results from General Circulation Models (GCMs) suggest more frequent and more severe extreme rain events in a climate warmer than the present. However, current GCMs cannot accurately simulate extreme rainfall events of short duration because of their coarse resolutions and parameterizations. This limitation makes it difficult to provide the detailed quantitative information needed for the development of regional adaptation and mitigation strategies. Dynamical downscaling using nested Regional Climate Models (RCMs) is able to capture key regional and local climate processes at an affordable computational cost. Recent studies have demonstrated that downscaling GCM results with convection-permitting mesoscale models, for example using the pseudo-global warming (PGW) technique, can be a viable and economical approach for obtaining valuable climate change information on regional scales. We have conducted a regional climate simulation with the 4-km Weather Research and Forecasting (WRF) model, with a single domain covering the whole of western Canada, for a historical run (2000-2015) and a 15-year future run (toward 2100 and beyond) with PGW forcing. The 4-km resolution allows direct use of microphysics and resolves convection explicitly, thus providing convincing spatial detail. With this high-resolution simulation we are able to study the convective mechanisms, specifically the controls on convection over the Prairies, the projected changes in rainfall regimes, and the shift of convective mechanisms in a warming climate, which has never before been examined numerically at such a large scale and at such high resolution.
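
    To make the PGW idea concrete, the sketch below perturbs reanalysis driving fields with monthly-mean GCM climate-change deltas (future minus historical). The file names, the variable name "T", and the omission of regridding are assumptions for illustration only; this is not the authors' workflow.

```python
# A minimal sketch of the pseudo-global warming (PGW) perturbation step: add monthly-mean
# GCM climate-change deltas to the reanalysis fields that drive the regional model.
import xarray as xr

gcm_hist = xr.open_dataset("gcm_historical.nc")   # hypothetical GCM historical run
gcm_fut  = xr.open_dataset("gcm_future.nc")       # hypothetical GCM end-of-century run
era      = xr.open_dataset("era_boundary.nc")     # hypothetical reanalysis driving data

# Monthly-mean climate-change signal for, e.g., temperature (variable name "T" assumed).
# Regridding of the GCM delta onto the reanalysis grid is omitted here for brevity.
delta = (gcm_fut["T"].groupby("time.month").mean("time")
         - gcm_hist["T"].groupby("time.month").mean("time"))

# Add the month-matched delta to every reanalysis time step to build PGW boundary conditions.
pgw_T = era["T"].groupby("time.month") + delta
pgw_T.to_netcdf("pgw_boundary_T.nc")
```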

  14. Subsynoptic-scale features associated with extreme surface gusts in UK extratropical cyclone events

    NASA Astrophysics Data System (ADS)

    Earl, N.; Dorling, S.; Starks, M.; Finch, R.

    2017-04-01

    Numerous studies have addressed the mesoscale features within extratropical cyclones (ETCs) that are responsible for the most destructive winds, though few have utilized surface observation data and most are based on case studies. Using a 39-station UK surface observation network, coupled with in-depth analysis of the causes of extreme gusts during the period 2008-2014, we show that larger-scale features (warm and cold conveyor belts) are most commonly associated with the top 1% of UK gusts, but that smaller-scale features generate the most extreme winds. The cold conveyor belt is far more destructive once it joins the momentum of the ETC than earlier in its trajectory, ahead of the approaching warm front. Sting jets and convective lines account for two thirds of severe surface gusts in the UK.

  15. Drivers and seasonal predictability of extreme wind speeds in the ECMWF System 4 and a statistical model

    NASA Astrophysics Data System (ADS)

    Walz, M. A.; Donat, M.; Leckebusch, G. C.

    2017-12-01

    As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. above the 95th percentile) in the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare them to the predictability obtained from a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns in the two models lead to substantially different drivers associated with the occurrence of extreme winds in each model. While ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only a few grid cells in the European domain show significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements in dynamical prediction skill through better simulation of large-scale atmospheric dynamics.
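
    A minimal sketch of the grid-cell-level skill check described above, assuming synthetic wind-speed data and illustrative array shapes: count winter days exceeding the local 95th percentile in forecasts and in reanalysis, then correlate the seasonal counts across years at each grid cell.

```python
# Synthetic-data sketch: exceedance counts of the local 95th-percentile wind speed and
# their year-to-year correlation between forecast and reanalysis, per grid cell.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_days, nlat, nlon = 20, 90, 30, 40                        # assumed dimensions
fcst = rng.weibull(2.0, size=(n_years, n_days, nlat, nlon)) * 8.0   # forecast wind speed (m/s)
obs  = rng.weibull(2.0, size=(n_years, n_days, nlat, nlon)) * 8.0   # reanalysis wind speed (m/s)

p95 = np.percentile(obs, 95, axis=(0, 1))        # local extreme threshold per grid cell
fcst_count = (fcst > p95).sum(axis=1)            # extreme-wind days per winter, forecast
obs_count  = (obs  > p95).sum(axis=1)            # extreme-wind days per winter, reanalysis

# Pearson correlation across the year dimension, one value per grid cell.
fa = fcst_count - fcst_count.mean(axis=0)
oa = obs_count - obs_count.mean(axis=0)
corr = (fa * oa).sum(axis=0) / np.sqrt((fa**2).sum(axis=0) * (oa**2).sum(axis=0))
print("share of grid cells with r > 0.3:", (corr > 0.3).mean())
```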

  16. Transport on percolation clusters with power-law distributed bond strengths.

    PubMed

    Alava, Mikko; Moukarzel, Cristian F

    2003-05-01

    The simplest transport problem, namely finding the maximum flow of current, or maxflow, is investigated on critical percolation clusters in two and three dimensions, using a combination of extremal statistics arguments and exact numerical computations, for power-law distributed bond strengths of the type P(σ) ~ σ^(−α). Assuming that only cutting bonds determine the flow, the maxflow critical exponent v is found to be v(α) = (d−1)ν + 1/(1−α). This prediction is confirmed with excellent accuracy using large-scale numerical simulation in two and three dimensions. However, in the region of anomalous bond capacity distributions (0 ≤ α ≤ 1) we demonstrate that, due to cluster-structure fluctuations, it is not the cutting bonds but the blobs that set the transport properties of the backbone. This "blob dominance" avoids a crossover to a regime where structural details, i.e. the distribution of the number of red or cutting bonds, would set the scaling. The restored scaling exponents, however, still follow the simplistic red-bond estimate. This is argued to be due to the existence of a hierarchy of so-called minimum cut configurations, of which cutting bonds form the lowest level, and whose transport properties all scale in the same way. We point out the relevance of our findings to other scalar transport problems (e.g., conductivity).
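
    The transport problem can be illustrated with a small numerical sketch: maximum flow across a bond-diluted 2D lattice whose surviving bonds carry power-law distributed capacities. The lattice size, the exponent α, and the use of networkx are illustrative choices, not the authors' simulation setup.

```python
# Maxflow across a 2D bond-diluted lattice with capacities drawn from P(sigma) ~ sigma^(-alpha)
# on (0, 1]. Illustrative parameters only.
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
L, p, alpha = 32, 0.5, 0.5          # lattice size, bond occupation prob (2D pc = 0.5), exponent

G = nx.grid_2d_graph(L, L)
for u, v in list(G.edges()):
    if rng.random() > p:            # bond percolation: remove unoccupied bonds
        G.remove_edge(u, v)
    else:                           # inverse-transform sample: sigma = U^(1/(1-alpha))
        G[u][v]["capacity"] = float(rng.random() ** (1.0 / (1.0 - alpha)))

# Super-source/sink spanning the left and right columns; edges without a 'capacity'
# attribute are treated by networkx as having infinite capacity.
G.add_edges_from(("source", (i, 0)) for i in range(L))
G.add_edges_from(((i, L - 1), "sink") for i in range(L))

flow_value, _ = nx.maximum_flow(G.to_directed(), "source", "sink")
print("maxflow across the lattice:", flow_value)   # zero if no spanning cluster exists
```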

  17. Springtime extreme moisture transport into the Arctic and its impact on sea ice concentration

    NASA Astrophysics Data System (ADS)

    Yang, Wenchang; Magnusdottir, Gudrun

    2017-05-01

    Recent studies suggest that springtime moisture transport into the Arctic can initiate sea ice melt that extends over a large area in the following summer and fall, which can help explain the interannual variability of Arctic sea ice. Yet the impact of individual moisture transport events, especially extreme ones, on synoptic to intraseasonal time scales is unclear, and this is the focus of the current study. In a daily data set, springtime extreme moisture transport into the Arctic is found to occur predominantly over Atlantic longitudes. Lag composite analysis shows that these extreme events are accompanied by a substantial sea ice concentration reduction over the Greenland-Barents-Kara Seas that lasts around a week. Surface air temperature also becomes anomalously high over these seas and anomalously cold to the west of Greenland as well as over the interior Eurasian continent. The blocking weather regime over the North Atlantic is mainly responsible for the extreme moisture transport, accounting for more than 60% of the total extreme days, while the negative North Atlantic Oscillation regime is hardly observed at all during the extreme transport days. These extreme moisture transport events appear to be preceded, by as much as 2 weeks, by eastward-propagating large-scale tropical convective forcing, though with considerable uncertainty owing to the lack of statistical significance.
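
    The lag-composite construction used above can be sketched as follows, with synthetic data standing in for the moisture-transport index and sea-ice-concentration anomalies: select days in the top 1% of transport and average the anomalies at lags around those days.

```python
# Lag-composite sketch around extreme moisture-transport days (synthetic time series).
import numpy as np

rng = np.random.default_rng(3)
n_days = 3000
transport = rng.gamma(2.0, 50.0, size=n_days)    # daily moisture transport index (arbitrary units)
sic_anom = rng.normal(0.0, 1.0, size=n_days)     # daily sea-ice-concentration anomaly (%)

threshold = np.percentile(transport, 99)         # "extreme" transport days (top 1%)
event_days = np.where(transport > threshold)[0]

lags = np.arange(-14, 15)                        # two weeks before to two weeks after
composite = np.full(lags.size, np.nan)
for k, lag in enumerate(lags):
    idx = event_days + lag
    idx = idx[(idx >= 0) & (idx < n_days)]       # drop lags that fall outside the record
    composite[k] = sic_anom[idx].mean()

for lag, value in zip(lags, composite):
    print(f"lag {lag:+3d} d: mean SIC anomaly {value:+.2f}")
```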

  18. Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall

    NASA Astrophysics Data System (ADS)

    Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik

    2016-02-01

    Rainfall is one of the climatic elements with high variability, and extreme rainfall in particular has many negative impacts; methods are therefore required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best tools for forecasting global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique for developing the relationship between GCM output, as global-scale independent variables, and rainfall, as a local-scale response variable. Using GCM output directly is difficult when assessed against observations, because it is high dimensional and the variables are multicollinear. Common methods for handling this problem are principal component analysis (PCA) and partial least squares regression; a newer alternative is the lasso. The lasso has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (wet extremes in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
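
    As a sketch of quantile regression with a lasso penalty on synthetic data, scikit-learn's QuantileRegressor minimizes the pinball loss for a chosen quantile plus an L1 term, so a nonzero alpha performs the automatic variable selection described above. The predictors and response here are synthetic stand-ins, not the Indramayu data.

```python
# Quantile regression with an L1 (lasso) penalty, on synthetic downscaling-style data.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(4)
n_samples, n_gcm_vars = 300, 25                     # assumed sizes for GCM grid-point predictors
X = rng.normal(size=(n_samples, n_gcm_vars))        # standardized GCM output (predictors)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.gumbel(0.0, 2.0, size=n_samples)  # local rainfall proxy

model = QuantileRegressor(quantile=0.90, alpha=0.05, solver="highs")  # 90th quantile, L1 penalty
model.fit(X, y)

selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
print("predictors kept by the lasso penalty:", selected)
print("estimated 90th-quantile rainfall for first 5 cases:", model.predict(X[:5]).round(2))
```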

  19. Effect of elevation on extreme precipitation of short durations: evidences of orographic signature on the parameters of Depth-Duration-Frequency curves

    NASA Astrophysics Data System (ADS)

    Avanzi, Francesco; De Michele, Carlo; Gabriele, Salvatore; Ghezzi, Antonio; Rosso, Renzo

    2015-04-01

    Here we show how atmospheric circulation and topography control the variability of the parameters of depth-duration-frequency (DDF) curves, and we discuss the physical implications of this variability for the formation of extreme precipitation at high elevations. A DDF curve gives the value of the annual maximum precipitation depth H as a function of the duration D and the probability level F. We consider around 1500 stations over the Italian territory, each with at least 20 years of annual maximum precipitation depth data at different durations. We estimated the DDF parameters at each location using the asymptotic distribution of extreme values, i.e. the Generalized Extreme Value (GEV) distribution, together with a simple scale invariance hypothesis. Consequently, a DDF curve depends on five parameters. A first set relates H to the duration (namely, the mean value of the annual maximum precipitation depth for unit duration and the scaling exponent), while a second set links H to F (namely, a scale, position and shape parameter). The value of the shape parameter determines the type of random variable (unbounded, upper bounded or lower bounded). This extensive analysis shows that the variability of the mean annual maximum precipitation depth for unit duration is governed by the coupled effect of topography and the modal direction of moisture flux during extreme events. Median values of this parameter decrease with elevation. We call this phenomenon the "reverse orographic effect" on extreme precipitation of short duration, since it contrasts with the general understanding of the orographic effect on mean precipitation. Moreover, the scaling exponent is driven mainly by topography alone, with values increasing with elevation. Therefore, the quantiles of H(D,F) at durations greater than the unit duration turn out to be more variable at high elevations than at low elevations. Additionally, the analysis of the variability of the shape parameter with elevation shows that extreme events at high elevations appear to be distributed according to an upper-bounded probability distribution. This evidence could be a characteristic signature of how extreme precipitation events form at high elevations.
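
    The two ingredients described above can be sketched numerically: a GEV fit to annual maxima at the unit duration and a simple-scaling relation H(D, F) = H(1, F)·D^n across durations. The synthetic annual maxima, the chosen durations, and the log-log estimate of the scaling exponent are illustrative assumptions, not the authors' estimation procedure.

```python
# GEV fit plus simple-scaling DDF sketch on synthetic annual-maximum depths.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
durations = np.array([1.0, 3.0, 6.0, 12.0, 24.0])      # hours
n_years, n_true = 40, 0.45                             # record length and "true" scaling exponent
annual_max = {d: stats.genextreme.rvs(-0.1, loc=20 * d**n_true, scale=5 * d**n_true,
                                      size=n_years, random_state=rng) for d in durations}

# Scaling exponent from the log-log slope of mean annual maxima versus duration.
means = np.array([annual_max[d].mean() for d in durations])
n_hat = np.polyfit(np.log(durations), np.log(means), 1)[0]

# GEV fit at the unit duration; quantiles at other durations follow from simple scaling.
c, loc, scale = stats.genextreme.fit(annual_max[1.0])
F = 0.99                                               # roughly the 100-year return level
H_1h = stats.genextreme.ppf(F, c, loc=loc, scale=scale)
for d in durations:
    print(f"D = {d:4.0f} h  ->  H(D, F=0.99) ~ {H_1h * d**n_hat:6.1f} mm")
```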

  20. Extreme Precipitation and High-Impact Landslides

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Adler, Robert; Huffman, George; Peters-Lidard, Christa

    2012-01-01

    It is well known that extreme or prolonged rainfall is the dominant trigger of landslides; however, large uncertainties remain in characterizing the distribution of these hazards and their meteorological triggers at the global scale. Researchers have evaluated the spatiotemporal distribution of extreme rainfall and landslides at local and regional scales, primarily using in situ data, yet few studies have mapped rainfall-triggered landslide distribution globally due to the dearth of landslide data and consistent precipitation information. This research uses a newly developed Global Landslide Catalog (GLC) and a 13-year satellite-based precipitation record from Tropical Rainfall Measuring Mission (TRMM) data. For the first time, these two unique products provide the foundation to quantitatively evaluate the co-occurrence of precipitation and rainfall-triggered landslides globally. The GLC, available from 2007 to the present, contains information on reported rainfall-triggered landslide events around the world drawn from online media reports, disaster databases, etc. When evaluating this database, we observed that 2010 had a large number of high-impact landslide events relative to previous years. This study considers how variations in extreme and prolonged satellite-based rainfall are related to the distribution of landslides over the same time scales for three active landslide areas: Central America, the Himalayan Arc, and central-eastern China. Several test statistics confirm that TRMM rainfall generally scales with the observed increase in landslide reports and fatal events for 2010 and previous years over each region. These findings suggest that the co-occurrence of satellite precipitation and landslide reports may serve as a valuable indicator for characterizing the spatiotemporal distribution of landslide-prone areas, in order to establish a global rainfall-triggered landslide climatology. This research also considers the sources of this extreme rainfall, citing teleconnections from ENSO as likely contributors to regional precipitation variability. This work demonstrates the potential of satellite-based precipitation estimates to identify potentially active landslide areas at the global scale, in order to improve landslide cataloging and quantify landslide triggering at daily, monthly and yearly time scales.
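
    A minimal sketch of the co-occurrence idea, on synthetic records rather than the GLC or TRMM data: count the days per month with regional rainfall above a high percentile and rank-correlate those counts with monthly landslide report counts.

```python
# Synthetic-data sketch: monthly extreme-rainfall days versus monthly landslide reports.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_months, n_days = 36, 30
rain = rng.gamma(1.5, 8.0, size=(n_months, n_days))    # daily regional rainfall (mm), synthetic
landslide_reports = rng.poisson(2.0, size=n_months)    # monthly landslide report counts, synthetic

p95 = np.percentile(rain, 95)
extreme_days = (rain > p95).sum(axis=1)                # extreme-rainfall days per month

# Rank (Spearman) correlation between monthly extreme-rain days and landslide reports.
rho, pval = spearmanr(extreme_days, landslide_reports)
print(f"Spearman correlation: {rho:.2f} (p = {pval:.3f})")
```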
